ISIS
But regardless of the big-think debate, public outrage fueled scrutiny, and scrutiny led to changes, at least if a company was big enough and its terrorist users active enough to make headlines.
Literally every social media platform of meaningful size hosted some number of violent extremists. But most scrutiny was directed at the top. The easy availability of white supremacist “hatecore” music on the once-popular social media service Myspace, for instance, generated little public interest, in part because the platform was seen as fading into obsolescence and in part because newspaper reporters were far less interested in covering white nationalists than jihadists.24
San Francisco–based file-sharing service Archive.org was often the very first place where jihadi media releases appeared, but few outside of counterterrorism circles paid the clunky-looking website much heed, and even jihadis wasted no time transferring their videos from Archive to YouTube once they were published.25
Headline-friendly services such as Facebook and Twitter took the brunt of the criticism, in part because they were becoming extremely popular venues where terrorist recruiters and supporters could operate, and in part simply because they were popular. Everyone knew about Facebook and Twitter; fewer knew or cared about Tumblr, the blogging service that hosted its fair share of jihadi outlets.
Facebook and Google, while generally favoring free speech, were also publicly traded companies with concerns about liability and a desire to create safe spaces for users, especially the young, who were vulnerable to a range of online predators of which violent extremists and recruiters were only one part.
Twitter stood apart. A privately held company until late 2013, Twitter’s founders and executives were perceived as libertarian-leaning advocates for free speech. Twitter more aggressively resisted broad government requests for information than most, and its rules for users contained few restrictions on speech.26
The company refused to discuss its criteria for suspensions, brushing aside queries with a boilerplate response.27 With the exception of spam and direct personal threats against individuals, users could get away with a lot. And Twitter’s “who to follow” recommendations for new users made it easy for would-be radicals to jump right in and start making connections with hardened terrorists, a process that was much more difficult in the 1980s and 1990s.28
The Taliban was one of the first jihadist-oriented organizations to embrace Twitter. In January 2011, its official media outlet created a Twitter account,29 soon followed by its spokesman, Abdulqahar Balkhi.30 Balkhi quickly became a minor sensation when NATO’s International Security Assistance Force (ISAF) account began publicly sparring with him.31 Toward the end of 2011, the Somali jihadist insurgent group al Shabab followed suit, and it soon racked up tens of thousands of followers.32
As highly visible insurgencies, rather than shadowy terrorist cabals, Shabab and the Taliban needed to manage public relations. They used their accounts to brag about military victories, harass their enemies, and rally supporters from their respective regions and around the world.
It wasn’t all upside. In 2012, as described earlier, American al Shabab member Omar Hammami broke with the group over differences in methodology and accusations of corruption in Shabab’s upper ranks. He took to social media to publicize the charges, airing Shabab’s dirty laundry and launching an extended conversation with Western counterterrorism analysts, including a long series of public and private exchanges with coauthor J. M. Berger.33
While journalists and academics had, over the years, cultivated sources within terrorist groups, the advent of social media opened the door to different types of interactions, exchanges that could involve daily or weekly conversation over the course of months. Social media also brought new risks: becoming too friendly with sources, whether publicly or privately, could give them a higher profile or be perceived as validating their views.
The more inherently secretive al Qaeda also established a presence on Twitter, along with some of its affiliates, but more covertly, resulting in a more limited reach. This lack of connectivity helped fuel the beginnings of dissent, as Hammami and other internal al Qaeda dissidents took to social media to air their grievances, only to be met by conspicuous silence (see Chapter 3).
After a slow beginning, Facebook took an aggressive stance against violent jihadists starting in 2009, actively monitoring, seeking out, and terminating pages and groups devoted to terrorist content, even when they were hidden from public view by privacy settings. It also terminated the accounts of key users who participated in such activities.34
Many of those suspended users simply sat down at their computers the very next day, created new accounts, and started all over again. So what was the point?
WHACK-A-MOLE
The phrase “whack-a-mole” had been used since the early 1990s to describe one of the major challenges of counterterrorism writ large.35 A children’s arcade game, Whac-A-Mole (sans k) features a table-sized playing field covered with holes. Toy moles pop out of the holes at random, first one at a time, then more and more, coming faster and faster.
The self-evident object of the game is to whack the moles with an included mallet as soon as they pop up. Inevitably, the moles begin to come faster than the player can whack them, and the player loses.
The dynamics of fighting terrorist groups were similar. Cracking down on a successful terrorist organization rarely led to the end of its associated movement. Take one cell out and new ones sprouted from the remains of the first. The CIA more elegantly described the problem in a secret 1985 internal analysis titled “The Predicament of the Terrorism Analyst,” which compared the splintering of violent extremist groups under government pressure to the many-headed Hydra of legend—cut one head off and two more grow to take its place.36
While the Hydra metaphor continues to have its fans, “whack-a-mole” made for more colorful sound bites. With the dawn of the twenty-first century, it quickly became ubiquitous as a phrase to casually dismiss the value of efforts to counter or suppress terrorist and extremist use of the Internet.
The debate started with the Internet service providers that al Qaeda used to host the forums. While the forums were operationally important, they were specialized. A terrorist forum didn’t come to you; you had to seek it out, sometimes armed with personal references. Some forums had lower barriers to entry, but a would-be al Qaeda member had to work his way through a series of such communities, earning trust and establishing credibility at each step, a slow process.
The social impact of the forums was relatively limited, while the counterterrorism benefits of allowing the forums to operate with only sporadic interference were clear. Although the forum administrators were usually based overseas, the United States offered the cheapest, easiest, and most reliable servers to host the content.
The fact that al Qaeda message boards were hosted by American companies incensed many people for reasons both political and patriotic, and some mounted public shaming campaigns in an effort to get those Internet service providers (ISPs) to take the forums down.37 But if a forum was hosted on a server based in the United States, it was fairly simple for the government to get a subpoena and start collecting highly sensitive data. None of this was visible to Internet users in general, and so the debate remained relatively low-key.38
Both the ecosystem and the calculus changed dramatically with the rise of a new generation of social media platforms. The forums were gated communities; open social media services like Facebook and Twitter were town squares, where people wandered around meeting each other and seeking out those with similar interests.
Compelling evidence suggests social media taken as a whole tends to discourage extremism in the wider population,39 but for those already vulnerable to radicalization, it creates dark pools of social connections that can be found by terrorist recruiters and influencers. On Twitter or Facebook, it was easy to seek out or stumble onto a radical or extremist account or community, and even easier for terrorist recruiters to seek prey within mainstream society.
“I see the cyber jihad as very, very important, because Al Qaeda, the organization, became mostly an ideology,” wrote Abu Suleiman al Nasser, a prominent forum member who shifted to Twitter, in a 2011 email interview. “So we try through the media and Web sites to get more Muslims joining us and supporting the jihad, whether by the real jihad on the ground, or by media and writing, or by spreading the idea of jihad and self-defense, and so on.”40
BIG BUSINESS
Virtually all extremist and terrorist groups have staked out ground on social media, from al Qaeda to Hamas, Hezbollah, the Tamil Tigers, the Irish Republican Army, and Babbar Khalsa (a Sikh militant group).41 In a 2012 study commissioned by Google Ideas, coauthor J. M. Berger documented thousands of accounts related to white nationalist and anarchist movements on Twitter, and participation in those networks has soared in the intervening years.
As terrorists made the transition to social media, public pressure mounted. Twitter stoically sat out the debate, rarely commenting but making its libertarian views on speech well known. “One man’s terrorist is another man’s freedom fighter,” an unnamed Twitter official told Mother Jones magazine.
“We take a lot of heat on both sides of the debate,” said Twitter CEO Dick Costolo, in one of the company’s extremely rare public statements on the matter.42
YouTube and Facebook, on the other hand, quickly learned the frustrations of whack-a-mole. Although the debate over terrorist suspensions frequently revolved around the intelligence question, terrorist content on social media was a business issue first and a cultural issue second. Intelligence concerns were, at best, a distant third.
The reason: Social media is run by for-profit companies, which are neither government services nor philanthropic endeavors (even if technology evangelists sometimes lost sight of the latter fact). The owners and operators of the platforms made the vast majority of decisions about which accounts would be suspended. Government intervention represented a tiny fraction of overall activity.
Each social media service had its own rules about abusive and hostile behavior that every user was obligated to follow or else risk being banned. The companies had no motivation to carve out exceptions for terrorist users who violated the rules, nor were they much inclined to treat their users as a resource for the intelligence community.
They did, however, have reason to worry about news headlines like “Despite Ban, YouTube Is Still a Hotbed of Terrorist Group Video Propaganda” and “Facebook Used by Al Qaeda to Recruit Terrorists and Swap Bomb Recipes, Says U.S. Homeland Security Report.”43
After uneven beginnings, YouTube began to enforce its ban on terrorist incitement in a steady but less than robust manner. It responded quickly to user reports about terrorist videos, but it didn’t deploy its full technological arsenal against them.
For example, Google could have written software to recognize the logos of terrorist groups and flag them for review. It did not. More significant, Google had developed the technological capability to prevent multiple uploads of a video that had already been flagged as a violation of its terms of service. The technology was invented to deal with copyright violations, but as of November 2014, it had not been deployed for use against terrorist videos.44
Facebook became proactive and began knocking down pages, groups, and users as a matter of routine, sometimes before ordinary users had a chance to complain about them. In an attempt to get around this, jihadis set up private, members-only Facebook groups to discuss bomb-making formulas and potential terrorist targets, but blatant plotting soon became a sure ticket to swift and repeated suspensions.45
As companies formulated policies for dealing with the influx of terrorist content, a cottage industry of open-source terrorism analysts blossomed almost overnight. Some analysts outside government preferred the one-stop shopping offered by the jihadist forums, which helped weed out noise and authenticate terrorist releases, but others found the insular environment difficult to crack, often requiring the creation of secret identities and undercover profiles to gain access to the juiciest and earliest content.
In contrast, social media seemed to offer ripe fruits for easy picking, especially on Twitter, where many jihadist organizations were now routinely distributing new releases describing their battles and claiming credit for attacks.46
Many among this new breed of social media analysts had a high opinion of the intelligence provided by low-hanging terrorist accounts. The new analysts broke down into several subcategories—academics, government contractors, government officials, journalists, and a burgeoning contingent of semiprofessional aficionados.
Some outside government confused terrorist press releases—by definition, the message the group wanted to promulgate—with verified information or operational intelligence. The easiest terrorist sources to find presented stage-managed messages, including outright lies. Many highly visible accounts belonged to stay-at-home jihadists far from the front lines.
Among global government officials and intelligence workers responsible for counterterrorism and countering violent extremism—the people fighting terrorism as opposed to those who study it—attitudes were different, especially as the months turned into years. Agencies were quick to recognize the power of so-called Big Data analytics in relation to the massive social networks that were forming in front of their eyes, but few had the capabilities to exploit the new pool of information on a large scale. In a majority of cases, social media was most useful to law enforcement and intelligence agencies not as a vast hunting ground but as a resource for discovering more information about suspects they had already identified.
In the United States, the government sometimes asked companies to suspend accounts. At least some of the time, the social media provider retained discretion in responding to such requests. Some European countries applied existing hate speech laws to social media platforms.47 Other countries, in the Middle East and South Asia, took a more aggressive stance against speech they considered objectionable (terrorist or not).48
At times, government agencies asked to keep social media accounts active, when they were part of an ongoing investigation or when their intelligence value clearly outweighed their utility to terrorism. While shrouded in secrecy, these cases appeared to be rare and highly targeted.
Two fairly direct analogues cut to the heart of the intelligence argument for allowing terrorists to operate entirely unimpeded on social media.
The first is to substitute virtually any other kind of crime or flagrant violation of the social contract for “terrorism.” To pick an extreme example, allowing child pornographers to operate online without impediment would undoubtedly yield tremendous intelligence about child pornographers. Yet no one ever argues this is a reasonable trade-off.
In less emotionally loaded terms, the same could be said about the online operators of Nigerian oil scams, Ponzi schemes, and phishing attacks, or online purveyors of drugs, contraband, or prostitution. None of these problems are solved by online interdiction; the moles keep popping up. But no one ever argues that these social media accounts should be immune to suspension.
The second analogue is to real-life activity. Anyone who studies intelligence and law enforcement knows it is sometimes valuable to allow criminals and terrorists to remain at large for a period of time, under close surveillance, in order to gain information about their activities. But when the system is working properly, such surveillance culminates in concrete actions to prevent violence and disrupt the criminal network’s function.
Of course, the very best intelligence on terrorism is produced by investigations that follow a successful terrorist attack, but no one would argue that the intelligence gained outweighs the cost. Online, the costs and gains are orders of magnitude smaller and considerably more ambiguous. But it is wrong to assume they do not exist or matter, or that the equation is always, or even usually, weighted toward intelligence.
Although reasonable people can disagree about where to draw the lines, there is no reasonable argument for allowing terrorists complete freedom of action when alternatives are available.
WHACKED
The whack-a-mole metaphor was also flawed on its face, for two reasons: It assumes zero benefit to removing the moles temporarily, and it assumes the moles will never stop popping up.
Suspensions diminished the reach of a terrorist social media presence, degrading the group’s ability to recruit and disseminate propaganda and forcing terrorist users to waste time reconstructing their networks. The suspensions didn’t eliminate the problem, but they created obstacles for terrorists.
Killing civilians and destroying infrastructure are not typically a terrorist organization’s end goals. Rather, they are a means to provoke a political reaction. Though it is sometimes forgotten, terrorism is ultimately intended to send a message to the body politic of the target, rather than being a pragmatic effort to destroy an enemy, although there are exceptions.
Therefore depriving terrorists of media platforms at key moments—such as the release of a beheading video—disrupts their core mission.
Suspending the accounts that distribute such content requires mole whackers to keep whacking, but it also requires the moles to keep finding new holes from which to emerge, making it more difficult to land a message with the desired audience (see Chapter 7).
At the start of 2013, the debate reached a watershed. Al Qaeda’s affiliate in Somalia, al Shabab, had grown fat and complacent on Twitter, where it maintained an official account (@HSMPress) that tweeted in English and had amassed 21,000 followers.
In addition to reporting its alleged military activities, in tweets that ranged from spin-laden to fantasy, the account frequently posted taunts and threats directed at Western and Somali governments.49