In the end, the Isla Vista massacre was revenge for perceived injustices, bullying, female rejection, and public humiliation. “After I picked up the handgun,” the shooter said, “I brought it back to my room and felt a new sense of power. I was now armed. Who’s the alpha male now, bitches? I thought to myself, regarding all of the girls who’ve looked down on me in the past.” His actions quickly inspired copycats. In 2017, a twenty-one-year-old walked into his former New Mexico high school and shot and killed two students. In online forums, he had used the name of the Isla Vista shooter as his pseudonym, as well as “Future Mass Shooter.” In 2020, a self-described incel targeted romantic couples at Westgate Entertainment District, a mixed-use development in Glendale, Arizona. He shot and injured two people in front of a restaurant, fired additional shots, and then shot a third person in a parking lot.
Most notably, in 2018, a twenty-five-year-old drove a van onto a busy commercial street in Toronto, killing ten people and wounding sixteen. He later told police the attack was retribution for years of rejection by women and that he identified as a member of the incel movement. In a message he posted on Facebook just before his rampage, the man cited the “Supreme Gentleman,” a term the Isla Vista shooter had used to describe himself, as his inspiration: “The Incel Rebellion has already begun! We will overthrow all the Chads and Stacys! All hail the Supreme Gentleman . . .”
—
Tackling the hate that underlies some mass shootings is tricky. In a classic experiment, a team of psychologists asked people to read a series of studies that seemed either to support or to refute the idea that capital punishment deters crime.20 They found that people readily accepted any data that supported their initial beliefs but rejected any information that opposed them, leaving participants even more convinced of their opinions and even more polarized. In other words, trying to correct misperceptions can actually reinforce them.
Researchers call this a “backfire effect,” whereby individuals hold fast to their perceptions, whether true or false, and grow only more intransigent the more countervailing evidence they are shown.21 They do this because they have already invested so much of themselves in a particular position, such as racism, misogyny, or homophobia, that any evidence suggesting they invested in the wrong position threatens their very identities. When people hold inconsistent beliefs, the theory of cognitive dissonance suggests, they almost always side with what is most comfortable rather than what is true in order to relieve the resulting tension.22
However, people can and do change their minds. There is no question the antidote to racism, sexism, anti-Semitism, and other hateful viewpoints is critical thinking. One approach is to “cognitively empower” people by encouraging them to think analytically and to consider available evidence more carefully.23 Investments in young people’s cultural awareness and media literacy and in countervailing messaging of tolerance and unity, which reassures victims and shuns perpetrators, will very likely take a bite out of hate-motivated attacks.
Social media companies could “de-platform” certain groups and individuals, thereby denying them access to a venue in which to espouse their hateful rhetoric in the first place. We saw this in January 2021, when President Donald Trump was de-platformed from Twitter and Facebook for inciting an insurrection intended to overthrow a free and fair election, and when Amazon’s cloud computing service pulled support for the “free speech” social network Parler, an alternative to Twitter that was popular among Trump supporters and was implicated in the 2021 storming of the U.S. Capitol. De-platforming obviously works—it largely silenced a sitting U.S. president during his final weeks in office—but making editorial decisions does raise thorny moral and legal questions for tech companies, which until now have been treated as platforms for, not publishers of, third-party content, and have been loosely regulated so as not to limit debate.
Some people argue that de-platforming from mainstream sites will simply force people into darker and even less regulated sites, where the social media echo chamber is amplified. Research finds that “hate clusters” often regenerate and spread across platforms, even when they are banned.24 However, Megan Squire, professor of computer science at Elon University and an expert in online extremism, argues that de-platforming seriously undermines hate because extremists need mainstream social media platforms to normalize their ideas and spread them to the largest audience.25 They are helped in this when mainstream figures, even presidents, retweet extremists’ words or refuse to denounce them.
When Twitter and Facebook let extremists’ profiles remain active, the companies lend the credibility of their online communities to them. It is much harder for fringe groups and individuals to appear “normal,” and for everyday people to be recruited into extremist groups, if those groups are buried in the depths of the internet. Mainstream platforms that combine both public and private means of communication, such as public posts, private groups, and direct messaging, also allow for a seamless pivot between front stage propaganda and backstage planning and organization. De-platforming thus disrupts both functions by deleting an extremist’s Rolodex of fellow extremists.
Look at some of the most recent hate-motivated mass shootings, and you’ll see that nearly every shooter posted some kind of indication of their hateful thinking on the internet in the days and weeks leading up to the event. Some shared complete “manifestos” online, stating their political or religious beliefs with undertones of malice and hatred. Others spelled out their violent intentions explicitly.
In the wake of the 2019 mass shooting in El Paso, which was preceded by the shooter posting a racist screed full of white supremacist talking points on 8chan, a hate-filled online message board, President Trump called on social media companies, which run massive platforms and can sift through the personal data of billions of people, to “detect mass shooters before they strike.” The president wanted private enterprises to develop new algorithmic tools for surfacing “red flags” that could enable the government to act earlier to prevent mass casualties.
Determining whether threats of violence on social media are credible is time- and labor-intensive work. In most cases, law enforcement still relies on tips—someone saw the threat and contacted authorities about it. And while social media companies flag to law enforcement those items they suspect indicate a specific threat, there are no federal laws requiring them to alert authorities or to take any other action in response to threats of violence posted on their platforms. This leaves law enforcement either out of the loop entirely or forced to subpoena companies for more information as needed—a cumbersome process when time is of the essence.
We’re at a critical juncture to change this. Technology companies such as Facebook (which also owns Instagram), Google (which owns YouTube), and Twitter are waking up to the reputational risks of being associated with hate speech and other harmful content and are increasingly devoting considerable resources to removing it. But as private companies, these platforms are beholden to their own internal hate speech and violence policies. The decision about whether to remove content or ban a user falls largely on hired content moderators, who manually review any flagged material using predefined guidelines.
Adjudicating thousands of triggering and traumatizing posts in potential violation of company policy each day is difficult work,26 made no easier by the fact that there is little consensus on what actually constitutes hate speech and that context or nuance can drastically alter the meaning of words posted online. The limits of free expression are difficult to craft into law, but if the government could do so, it would be a significant advance.
If someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip many company policies as they are currently written, because social media companies, champions of free speech, rarely punish what people say they might do, only what they’ve actually done. In August 2020, a militia page advocating for followers to bring guns to oppose a Black Lives Matter protest in Kenosha, Wisconsin, was flagged to Facebook at least 455 times after its creation, but it was cleared by no fewer than four moderators, all of whom deemed it “non-violating.” The page and event were eventually removed from the platform . . . several hours after a seventeen-year-old allegedly shot and killed two protesters.27
With the benefit of hindsight, content moderators clearly got that one wrong. The question is: Can our hindsight become someone else’s foresight? Banning accounts and/or flagging them to authorities before anyone makes a solid threat against a person or group moves us into Minority Report territory, where police apprehend criminals for crimes not yet committed based on precognition. Is America comfortable leaving it up to Silicon Valley to decide who is and is not the next mass shooter? After all, social media companies are precisely that: companies. They have a singular interest in creating shareholder value. Facebook, Twitter, and other social media platforms were created with the simple goal of connecting people online, but because more human attention and engagement mean more advertising dollars (their primary source of income in the absence of subscription and usage fees), they’ve done little to date to protect their users.28
Social media platforms are designed to profit from a form of confirmation bias, the natural human tendency to seek, “like,” and share new information in accordance with preexisting beliefs. To keep us online, they rely on adaptive algorithms that assess our interests and flood us with content similar to what we liked before. This makes it hard for extremists to kick the habit. Even someone who wants to avoid hate online finds that personalized results, based on past click behavior and history, create “filter bubbles” that make hate unavoidable.29 The social media echo chamber reaffirms hate by silencing outside voices and drowning out any countervailing message an intervention might offer. Algorithms promote content that sparks outrage, and they amplify whatever biases are in the data users feed them.
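To see how such a feedback loop narrows what a user encounters, consider a minimal sketch of a content-based recommender. It is purely illustrative, not any platform’s actual system: the item names, topic weights, and update rule are all invented for the example.

# A minimal, illustrative sketch of a content-based recommendation loop.
# All names and numbers are hypothetical; real platforms are far more
# complex, but the feedback dynamic is the same.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical catalog: each item scored on [sports, politics, outrage].
items = {
    "game_recap":      [0.9, 0.1, 0.0],
    "policy_analysis": [0.0, 0.9, 0.1],
    "outrage_post":    [0.0, 0.4, 0.9],
}

profile = [0.3, 0.3, 0.4]  # the user's inferred interests, updated per click

for step in range(5):
    # Show whichever item most resembles what the user already engages with.
    shown = max(items, key=lambda name: cosine(profile, items[name]))
    # A click nudges the profile toward the item just consumed.
    profile = [0.8 * p + 0.2 * v for p, v in zip(profile, items[shown])]
    print(step, shown, [round(p, 2) for p in profile])

# Within a few iterations the same item wins every round: the loop feeds the
# user more of what they clicked, and the rest of the catalog disappears.

Nothing in the loop “wants” outrage; it simply rewards whatever similarity signal is strongest, which, as the text notes, tends to be emotionally charged content.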
Algorithms are more a problem than a solution, it seems. There’s an important difference between tipping authorities off when someone posts a concrete threat of violence and using “big data” to identify who could potentially be a shooter. This assumes the algorithms can even get it right. There is a tendency to think of machine learning as a cure-all to expedite decision-making and mitigate existing human fallibility in the process. The fact is, algorithmic tools are based on decisions and data, and that makes them no more objective than the humans who create them.30
An algorithm is only as good as its trainer, whose own preferences are baked into the code they write, and deep learning is only as good as its data. Tech firms have a lot of data (probably too much), but that does not necessarily mean it is good data, fit for this purpose. We explicitly caution against using our data for predictive modeling—and we’ve curated the largest, most comprehensive database on mass shooters. The reason for this? Many of the factors correlated with mass shootings, from childhood trauma to gun ownership, are true of millions of people who never commit mass shootings. Mass shootings are also extreme and rare events. There are simply not enough outcomes to balance all the inputs, and that balance is precisely what a predictive algorithm needs in order to predict.
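A back-of-the-envelope calculation makes the point. The numbers below are hypothetical, assumed only for illustration, and the accuracy figures are far more generous than any real screening model could claim:

# Hypothetical base-rate arithmetic for predicting an extremely rare event.
population  = 250_000_000  # assumed pool of adult social media users
true_cases  = 10           # assumed genuine would-be shooters in a year
sensitivity = 0.99         # assume the model catches 99% of true cases
specificity = 0.999        # assume it misflags only 0.1% of everyone else

true_positives  = true_cases * sensitivity                       # ~10 people
false_positives = (population - true_cases) * (1 - specificity)  # ~250,000

precision = true_positives / (true_positives + false_positives)
print(f"people flagged: {true_positives + false_positives:,.0f}")
print(f"share of flags that are real: {precision:.4%}")  # about 0.004%

Even under these implausibly generous assumptions, roughly twenty-five thousand innocent people would be flagged for every genuine threat, and a model trained on ten positive examples has almost nothing to learn from.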
In the end, the best defense against extremism is to be found within ourselves and in the cohesive and multicultural communities we create. Extremism is not something foreign to our society, but instead part of it. We don’t have to hunt down hate to find it; we need only recognize that it is loosely directed even when it is tightly held. Perpetrators of mass shootings motivated by hate tend to have a loose affiliation and a weak ideological belief system. Their true hatred is self-loathing, anger, frustration, and hopelessness. This opens the door to intervention, because the people saying hateful and hurtful things, online and in person, are really just projecting an underlying unhappiness with themselves, not a strong conviction about others. To stop mass shootings motivated by hate, we must embrace the complex reality that online activity is rooted in real-world experience. The online and offline lives of mass shooters are not mutually exclusive, but rather one and the same. For any intervention to be successful, it must reach people where they are, in both digital and physical space. It must also reach them earlier, before lost souls ever go searching for hateful narratives to make sense of their lives, and before a mass shooting is ever on the horizon.
CHAPTER 8
OPPORTUNITY
In his book about the scourge of urban gun violence, Bleeding Out, Thomas Abt, a senior fellow at the Council on Criminal Justice, uses a powerful metaphor.1 If a young man is rushed into an emergency room, dying from multiple gunshot wounds, the doctor doesn’t start listing off all the social problems that got him there—a broken home, poverty in the neighborhood, a lack of education, employment, and training opportunities—even if, technically, they might be root causes. No, instead the doctor says, “We’ve got to stop the bleeding.” Because that’s the first step. You address the immediate emergency, and then you work outward from there.
Well, when it comes to mass shootings, in 2020 we stopped the bleeding. U.S. mass shootings hit a record high in 2018, with nine, including the Valentine’s Day massacre at a high school in Parkland, Florida. Before 2018, the most in a single year was seven, a mark first set in 1999 and matched in 2017, the year we started this research, which was also the deadliest year on record after the unprecedented shooting that took place in Las Vegas. There were another seven mass shootings in 2019, three with large death tolls in the month of August alone. The worst years on record for mass shootings, then, were 2017, 2018, and 2019.
The year 2020 started much like 2019 ended, with a mass shooting on February 26, when an employee at a brewery in Milwaukee killed five coworkers. Then, on March 15, a man killed four people at a gas station in Springfield, Missouri, before killing himself. America was at risk of bleeding out. But then, suddenly, unexpectedly, the United States fell into the grip of a global pandemic—and the shooting stopped.
The novel coronavirus created an interesting natural experiment. It did two things to mass shootings. One, it curtailed the opportunity for them. Criminologists once assumed that opportunity merely determined when and where a crime occurred. However, four decades of research show that opportunity actually causes crime.2 With people’s movements restricted; with schools, businesses, bars and restaurants, places of worship, and other possible shooting sites all closed; and with the vast majority of Americans staying indoors, COVID-19 took the masses away from mass shootings.
The other thing the novel coronavirus did was stop the contagion. Prior to 2020, America’s fear of and fascination with mass shootings was fueling other mass shootings in three ways. First, one mass shooting provided social proof for another—and so the next mass shooting inevitably followed the last, whether in style or substance. Second, intense media coverage of mass shootings led more people to seek to become copycat killers. And third, endless discussion and excessive worry over the risk of mass shootings fed daily mass shooting routines, such as active shooter drills, which, in turn, planted the seed that mass shootings were normal, a legitimate way of handling grievances if someone was angry and struggling. COVID-19 broke this cycle, keeping mass shooters out of the headlines and out of our heads; it helped that, in 2020, everyone was angry and struggling.
We’ve seen this break in routine before. Northeastern University criminologist James Alan Fox recalls how the September 11, 2001, terrorist attacks deflected attention from an alarming sequence of school shootings in the late 1990s and early 2000s, including Columbine, and there wasn’t another multiple-victim K–12 school shooting for four years.3
This time, unfortunately, the break didn’t last. In March 2021, as the pandemic eased, just as businesses and workplaces began reopening and people started gathering in larger numbers, we had two mass shootings in a week—the first at three Atlanta-area spas that left eight people dead, then a shooting at a grocery store in Boulder, Colorado, that killed ten. The return to public life meant a return to the routine of mass shootings, in part because lawmakers did nothing in between to prevent them, but also because the pandemic exacerbated many risk factors for violence, such as social isolation and economic hardship.
But there is still an important lesson to be learned from mass shooting trends in 2020. We obviously cannot shut down public life and stay home permanently wrapped in personal protective equipment, trading one form of mass death for another. Because we’ve stopped the bleeding before, however, we can apply the same opportunity principle, grounded in the science of situational crime prevention, to help stop the bleeding again.
—
In the early twentieth century, the United Kingdom heated domestic ovens with coal gas, which contained lethal levels of carbon monoxide. By the late 1950s, more than half of all suicides there—about the same proportion as firearm suicides in the United States today—involved someone putting their head in an oven, to use the common expression of the day, because it offered a quick, painless, bloodless means of death.
Then, in the 1960s, for reasons that had nothing to do with suicide prevention, the government began to replace manufactured gas with cleaner natural gas from the North Sea, which was virtually free of carbon monoxide. By 1977, less than one half of 1 percent of suicides used domestic gas, and the overall national suicide rate fell by a third.4
Twenty years later, the United Kingdom also changed the packaging for a popular over-the-counter painkiller to require “blister packs” for packages of sixteen pills when they were sold in places like convenience stores, and for packages of thirty-two pills in pharmacies. Big bottles made it easy to pour out many loose pills at once and were implicated in hundreds of deliberate and accidental overdoses each year. Blister packs meant pills had to be popped out one by one, making it a long, slow process to pop out enough pills to die by suicide.5 A study by Oxford University found that suicide deaths from paracetamol overdoses fell by 43 percent over the next decade.6 A similar decline was found in accidental deaths from medication poisonings, and overdose-related liver transplants dropped by 61 percent.