
Messing with the Enemy


by Clint Watts


  Having trained at the FBI Academy and studied Soviet politics, military, and intelligence at West Point, I recognized the technique as the digital update to age-old spycraft. These personas were recruiting me, virtually, through the timeless approach of “spot, assess, develop, and recruit.” Either my credentials or my conversations led these personas to spot me as an adversary needing to be compromised or a potential ally needing to be courted. By evaluating my public social media posts, they assessed whether I’d be amenable to connecting through a follower relationship allowing one-to-one communication. They then sought to develop the relationship through private direct message conversations. Sharing and discussing, these accounts hoped I’d move to their position, or, at a minimum, confirm the legitimacy of their stance. Finally, if the relationship developed as they hoped, the influence accounts intended to recruit me, wittingly or unwittingly, to promote their policy agenda in my own public messages or rebroadcast their messages in a public forum. Social media provided an avenue for these influencers to further cloak their true personas and real intentions, all with remarkably lower effort, less risk, and fewer resources than in the Cold War days of spy versus spy.

  If I wasn’t biting on a following relationship that permitted direct communications, they’d switch to countering me publicly rather than converting me privately. Countering my comments and those of other foreign policy pundits allowed them to pursue several parallel strategies. Through endless harassment, hecklers sought to make my and many other foreign policy experts’ experience so negative that we’d remove ourselves from the social media platform altogether, take ourselves out of the debate, and create more space for the hecklers’ preferred foreign policy position. If we didn’t leave the platform, we’d suffer endless challenges, with the goal of leaving us virtually wounded, our credibility in question.

  A second, more involved countering technique was account compromise followed by public shaming. In this case, hackers, appearing as adoring fans, would bait targets into clicking on links, allowing them to access a target’s computer and uncover their personal secrets or, in some cases, simply take over their social media persona. Account takeovers might result in the target’s audience of supporters unfollowing the account for fear of being hacked themselves—something like public fears of communicable disease transmission. If the account takeover yielded particularly sultry and damaging secrets, the target’s personal files would be selectively slipped to a media outlet, resulting in the target possibly being fired from a job, hated by their family, and certainly unfollowed and rebuked by even the most ardent online supporters.

  Not surprisingly, during the summer of 2014, one thorny expert who’d been seen jabbing at the Kremlin had his private conversations of a sexual nature and a nude picture leaked to media outlets, a traditional technique deployed by online honeypots. The pundit immediately suffered embarrassment and investigation into potential misconduct based on his professional position. The public smear campaign worked for the moment, the professor took a break from both social media and work for a spell, and the trolls racked up another score. Did the Kremlin perpetrate this honeypot operation? Who knows. There were likely many people motivated to take down this expert, or maybe this was just another case of poor judgment, but the character assassination technique proved rampant in 2014.

  I myself had to decide during these heckler conversations, whether openly adversarial or seemingly friendly, if the risks were worth the reward. Conventional wisdom said that I should immediately block hecklers yelling at me over Syria and honeypots wanting to chat about all things pro-Russia. One wrong click would bring hackers into my computer systems, where they’d have access to all my private information, which they’d surely manipulate to discredit me publicly. I could create parallel computer systems, one protecting my personal information and another for conducting social media research, but the costs and inconvenience made work nearly impossible.

  I recalled the lessons from the Omar Hammami days: how much easier it is to understand terrorists if you talk to them rather than try to avoid them. I knew it would be far more difficult to figure out who these accounts were and what they were up to if I didn’t stay engaged with them. Sure, I might go down in social media flames, but understanding an enemy requires one to engage with them, mess with them a little bit. I kept the direct message conversations open but didn’t click on their content, particularly links, and I’d only watch each day, praying that slipping my phone into my pocket or a bag wouldn’t accidentally cause it to click on a nefarious link. I started taking some countermeasures as well, creating misinformation on my email accounts, posting pictures misdirecting my location, and leaving some stale digital bread crumbs on my system.

  Most social media analysis in 2014 focused almost exclusively on the Islamic State, and that was where I initially encountered the Kremlin’s trolls and their Iranian and Syrian allies. The Islamic State’s social media radicalization and recruitment methods were unprecedented in the terrorism world, but they paled in comparison with what was emerging from Russia. The Islamic State mastered online shock-and-awe footage, and its coordinated dissemination to followers across a wide range of social media platforms proved unprecedented for extremists. But the hacker-honeypot-heckler collective of the Kremlin coordinated seamlessly in their targeted trolling, and they sought not to get young men to travel to Syria but to change international minds about the conflict overall—creating doubt over Syrian human rights violations, offering Assad as the only option to create a safe and secure Middle East absent of terrorism. Recruiting a few thousand disenfranchised boys into a terrorist militia had been done twice before—the Islamic State did it the best—so the effect wasn’t new. But influencing the entire world, unwittingly, to accept President Assad and his indiscriminate killing and destruction of his own country—that was an unprecedented social media undertaking. The troll army we encountered in 2014, the one hammering away at the “Alaska Back to Russia” petition, was on the cusp of a new era of digital influence—it was employing automation alongside humans, creating a new, alternative, and false perception of the world. This was master-level influence unlike any the world had seen.

  * * *

  Computational propaganda leverages social media to rapidly spread information supportive of a particular ideology. One of the first groups to study this emerging field of online influence was the Oxford Internet Institute. Phil Howard, professor and leader of its Computational Propaganda Project, defines computational propaganda as “the use of information and communication technologies to manipulate perceptions, affect cognition, and influence behavior.” This manipulation occurs through the deployment of what are known as social bots—programs, driven by a computer algorithm, that produce personas and content on social media applications replicating a real human. These social bots have also passed the important milestone known as the Turing test, a challenge developed by Alan Turing, a leading member of the British team that cracked the German Enigma code.4 The test assesses whether a machine has the ability to communicate, via text only, at a level equivalent to that of a real person, such that a computer—or, in the modern case, an artificially generated social media account—cannot be distinguished from a live person.

  The bots we observed did just that: they created artificial accounts, emulating real people, that mimicked the conversations of target audiences in several geographies around the world. Some bots were strictly automated spamming accounts that replicated the same message or a combination of messages at a standard interval. Other bots operated more as cyborgs, part human and part automation, such that an individual user would administer what may be dozens of replicated accounts, amplifying messages and selectively replying or messaging key influencers in an audience space.
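  The “standard interval” posting pattern described above is also one of the simplest signals analysts use to separate fully automated spammers from cyborg or human accounts. As a rough, hypothetical sketch (the account data, field layout, and threshold below are illustrative assumptions, not anything drawn from the troll network itself), near-clockwork posting can be flagged by measuring how uniform the gaps between an account’s posts are:

```python
from statistics import mean, stdev

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between posts (in seconds).
    Values near 0 mean clockwork-regular posting, a bot-like signal."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None  # not enough activity to judge
    return stdev(gaps) / mean(gaps)

def flag_bot_like(accounts, threshold=0.1):
    """accounts: dict mapping account name -> sorted post times (epoch seconds).
    Returns names whose posting cadence is nearly uniform."""
    flagged = []
    for name, times in accounts.items():
        cv = interval_regularity(times)
        if cv is not None and cv < threshold:
            flagged.append(name)
    return flagged

# Hypothetical example: one account posts every 600 seconds exactly,
# the other at irregular, human-looking intervals.
accounts = {
    "spam_bot": [0, 600, 1200, 1800, 2400, 3000],
    "real_user": [0, 340, 1900, 2100, 5200, 9000],
}
print(flag_bot_like(accounts))  # ['spam_bot']
```

  Purely automated accounts tend toward near-zero variation in their posting cadence; cyborg accounts, with a human in the loop, look noisier, which is why analysts typically pair timing signals with measures of content overlap across accounts.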

  Theoretically, bots could be employed for positive purposes, such as public awareness and emergency notifications. But Howard points out the dangers of social bots, noting that political actors in democracies might use them to promote political campaigns, transnational actors could easily employ them to influence foreign audiences, and authoritarians could leverage them for domestic social and political control.5 Beyond rapidly spreading falsehoods throughout specified audiences, bot amplification of news stories can lead to the widespread dissemination of misinformation among mainstream media outlets, overwhelming the ability of journalists and fact-checkers to assess the veracity of an endless stream of propaganda. Clayton Davis, a researcher of computational propaganda at Indiana University, notes how bots can create a “majority illusion, where many people appear to believe something . . . which makes that thing more credible.”6 Users must then sift through an endless series of tweets and posts, repeatedly evaluating the veracity of messages and their sources.
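  Davis’s “majority illusion” has a concrete structural explanation: when a handful of heavily followed (or bot-amplified) accounts push a claim, most users see the claim coming from a large share of the accounts they follow, even though only a tiny fraction of the overall network actually holds it. A purely illustrative simulation (the network shape and numbers are assumptions, not data from the research cited) shows the effect:

```python
import random

random.seed(1)

# Hypothetical network: 5 "hub" accounts (heavily amplified, e.g. bot-boosted)
# plus 200 ordinary users. Only the hubs push the claim.
hubs = [f"hub{i}" for i in range(5)]
users = [f"user{i}" for i in range(200)]
believers = set(hubs)  # 5 of 205 accounts, under 3%, actually push the claim

# Each ordinary user follows all 5 hubs plus 3 random other ordinary users.
following = {
    u: set(hubs) | set(random.sample([v for v in users if v != u], 3))
    for u in users
}

# Share of users who see the claim from at least half of the accounts they follow.
fooled = sum(
    1 for u in users
    if sum(f in believers for f in following[u]) >= len(following[u]) / 2
)
print(f"{fooled / len(users):.0%} of users see a 'majority' pushing the claim")
# Prints 100%, even though under 3% of accounts actually push it.
```

  In this toy network, fewer than 3 percent of accounts push the claim, yet every ordinary user sees it coming from a majority of the accounts they follow—exactly the credibility distortion Davis describes.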

  Our analysis of the troll army continued throughout 2014. Regardless of the foreign policy issue, troll army accounts deliberately pushed the same pro-Russian content to their targeted audience. News stories from Russian government-run outlets, along with a range of content from lesser-known conspiratorial websites, surfaced nearly simultaneously from social media personas that appeared, on the surface, to come from different geographic locations around the world. The networks looked less Syrian and more Russian every day.

  The troll armies I’d encountered gained steam through 2014 and early 2015, and their patterns became predictable and repetitive, with one interesting twist: they were more interested in American audiences. American- and European-looking accounts among the troll army gained momentum, increasing their organic following among real Americans. Heckler accounts pounced on any issue, liberal or conservative, always taking a divisive tack—fomenting divides between rich and poor, black and white, immigrants and non-immigrants, and all those harboring antigovernment angst. If a Black Lives Matter protest broke out in a U.S. city, the trolls were there, pushing a mix of true and false messages with a conspiratorial bent. Remnants of the Occupy movement or conservative gun owners—it didn’t matter to the trolls; they repurposed existing inflammatory messages or helped seed new conspiracies tearing down the U.S. government. The trolls gravitated to Russian state-sponsored outlets, and helped distribute their false news stories or manipulated truths to newly won Western audiences. Americans of all kinds lapped up this content, but the right-wing fringe groups ran with it far more than the rest, particularly white supremacists and antigovernment groups. Still, for the first eighteen months or so, I didn’t believe the Russian influence efforts on America made much difference. That is, until the summer of 2015.

  One campaign promoted by this Russian troll network resonated more than others. Jade Helm 15 was a military exercise sponsored by U.S. Special Operations Command, held in seven southwestern states. The training exercise sought to improve coordination and operations among U.S. special operations forces, ostensibly to prepare them for overseas deployments combating terrorists.

  Right-wing conspiracy theorists thought differently about Jade Helm 15. Bloggers, Alex Jones of Infowars, celebrity columnist Chuck Norris, and even Texas governor Greg Abbott all feared that the military training exercise constituted a secret U.S. government plan to “impose martial law, take away people’s guns, arrest political undesirables, launch an Obama-led hostile takeover of red-state Texas, or do some combination thereof,” the New York Times reported.7 Russian troll networks saw fresh opportunity to further foment this division between the American public and its government. Hecklers infiltrating right-wing audiences shared and recycled conspiracy theories with unwitting Americans, helping fan antigovernment flames. Russian government-run news outlets jumped into the fray, further promoting conspiracies of a Texas takeover.8 Several hundred Texas residents showed up to protest and shout down a U.S. Army officer providing a public briefing on the exercise. The conspiracy reached such heights that Governor Abbott deployed the Texas State Guard to monitor Operation Jade Helm and ensure that no federal takeover was afoot.9 Ultimately, the exercise occurred with little fanfare and was more of a public relations crisis than a threat of martial law. But the Jade Helm 15 exercise revealed to me and likely to the Russians just how easy social media influence could be with segments of the American public.

  The troll army’s operations in 2014 and 2015 indicated how Russia’s long-run investment in state-sponsored media outlets was beginning to pay big dividends. Social media provided a subtler pathway for government-sanctioned news outlets like RT and Sputnik News to grow their American viewership. RT had launched in late 2005 as a satellite news channel under the moniker Russia Today. Viewership lagged under the overt Russian banner; its content had limited reach with foreign audiences. In 2012, Russia Today deftly changed direction, shifting its focus from Russian domestic issues to Western social problems and the flaws of democracy. Programming shifts coincided with a branding switch and the name Russia Today was condensed to RT, subtly masking the channel’s sponsor.

  In 2012, RT showed the fastest growth among international news channels in the United States. It implemented a social media strategy to spread among English-speaking audiences, becoming an early adopter and highly effective disseminator on YouTube. Today, RT maintains nearly four times as many YouTube subscribers as CNN. RT’s Facebook chatter outperformed that of BBC World. RT achieved this growth, in part, by using its reporters’ and producers’ social media personas as distribution mechanisms for content that does not appear directly on its television channel, further obfuscating the Russian source. Disenfranchised Americans angered at perceived social grievances or bitter with the U.S. government, unaware that RT was sponsored by Russia, devoured this social media content, which confirmed their a priori assumptions, regardless of whether the content was true.

  Throughout 2015 and leading into the election year, friends shared RT news stories with me on Facebook for the first time. Sometimes I’d point out to the sender that RT was a Russian state-sponsored news outlet. This usually caught the sender off guard; they’d be surprised to find out that the English articles they were reading and sharing came from a Kremlin outlet. When jumping in a New York City cab or traveling on the subway, I began noticing RT advertisements for the first time—ads promoting notable U.S. personalities. For instance, RT’s Election Night coverage for the 2016 presidential election starred Larry King, formerly of CNN; Ed Schultz, formerly a left-leaning host on MSNBC; and former Minnesota governor Jesse Ventura, one of the world’s most notable conspiracy theorists, who regularly claims that the 9/11 attacks were the work of the U.S. government. This lineup points to the range of audiences Russia seeks to influence—left, right, Occupy, white supremacist, antigovernment, those frightened by vaccine conspiracies, and climate change advocates and deniers. RT has something for every American if it means weakening support for the U.S. government and its institutions. RT’s tagline is “Question More.” Of course, it seeks to provide not answers, but doubt. How do you know? Could it be this instead? Can you really trust the government? Isn’t the U.S. government hypocritical? The net takeaway of RT coverage is that nothing can be trusted, and if you can’t trust anyone, then you’ll believe anything.

  Social media provides the perfect avenue for disseminating content to those who want a certain type of news but don’t necessarily want to know how their news is made. Russia intelligently uses a range of nefarious state-sponsored outlets to promote its preferred narratives and quench the thirst of the conspiratorial. Russia’s second-most-prolific outlet is Sputnik International, a wire service and online news site, which offers a more sinister and spectacular version of RT’s mainstream coverage. If a claim against America seems a bit too outlandish for RT, which attempts to maintain a higher degree of credibility, Sputnik quickly steps in to proliferate the sensational. Beyond Sputnik, Russia foments chaos with the support of fringe websites. Conspiratorial sites from Moscow to America repeated Kremlin talking points and manufactured conspiracies throughout 2014 and 2015. Sensational stories built from Kremlin-conceived conspiracies pushed clickbait headlines into American Twitter feeds and Facebook News Feed bubbles. The more Americans clicked on these unknown websites, the more their stature rose in the mainstream media.

  Russia’s success with overt state-sponsored news and covert trolling can be credited partly to the seamless integration of its intelligence and security services. Putin’s top deputy attends RT’s daily meeting, helping direct the outlet’s themes and objectives. Some Kremlin-linked trolls receive guidance from Russian intelligence services, but others are more loosely connected. Some die-hard supporters inhale Kremlin propaganda and belch it back out into social media at what seems like an unimaginable pace. Others are likely being manipulated without knowing it. For instance, a tweet or post about a conspiracy theory, political scandal, or national emergency might suddenly receive thousands of likes, retweets, and shares, incentivizing the original user to post more content and reap social media rewards. The user may think he’s quite popular and never truly know that Russia was directing engagement at him.

  But some—the core of the troll armies, like the Internet Research Agency cited in the Mueller indictment in February 2018—clearly receive marching orders from the top of the Russian government. Housed in an ordinary-looking corporate building, Kremlin-employed trolls drive pro-Russian themes in chatrooms, blogs, comment columns, and social media posts. Trolls have a quota they must hit each day, just as in any other job.10 Some of their articles, posts, and jabs are pithy and on target. Others appear crude and lazy, attempts to meet a deadline more than an objective.

 
