hands of the Buddhist majority was particularly facilitated by increased digital
connectivity. As a report from the Brookings Institution noted, “the sudden
rollback of authoritarian controls and press censorship—along with the rapid
expansion of internet and mobile phone penetration—opened the floodgates
to a deluge of online hate speech and xenophobic nationalism directed at the
Rohingya.”34
The hate speech and the physical violence it spawned were not unknown
problems, even in the early days. The first wave of social media–enhanced
violence came in 2014. McLaughlin writes that “the riots wouldn’t have
happened without Facebook.” 35 Much like the Arab Spring uprisings that
brought down governments, the increased connectivity in Myanmar pushed
tensions over a critical threshold, making the large-scale riots of 2014
possible. McLaughlin also writes that Facebook “had at least two direct
warnings” of rising hate speech potentially leading to violence before those
2014 riots, but because the company saw Myanmar as a “connectivity opportunity,” it dismissed the rising hate speech on its platforms and the notion that it could incite violence in “real life.” That said, in 2013, the
group Human Rights Watch also dismissed the idea that rising hate speech
on Facebook was a significant problem. For most people, the primary human
rights issue was one of increasing access to digital technology, as well as the
social, educational, economic, and political benefits that such access would
surely bring.
Of course, that’s not what happened. Myanmar’s citizens, so used to a
combination of state-run propaganda and local rumor mills, did not immediately
use social media to supplant those propaganda outlets with balanced and
nuanced journalism. Rather, they largely transferred their participatory roles
from the information economy they knew into this new platform—in other
words, they made Facebook into a large-scale version of the local rumor mills
to which they were accustomed to contributing. In a country plagued by such
deep-seated social, political, and religious tension, it is no surprise—at least in
hindsight—that the result was an increase in disinformation, misinformation,
and hate speech that contributed to offline physical harm as well.
This dearth of digital literacy, propensity toward spreading and believing
rumor, and deep-seated ethnic tension served as the backdrop for the
psychological warfare that the Myanmar military would use against its own
people.
34Brandon Paladino and Hunter Marston, “Facebook can’t resolve conflicts in Myanmar and Sri Lanka on its own,” Order from Chaos, Brookings, published June 27, 2018, www.brookings.edu/blog/order-from-chaos/2018/06/27/facebook-cant-resolve-conflicts-in-myanmar-and-sri-lanka-on-its-own/.
35McLaughlin, “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar.”
The rumors and the hate speech that encouraged violence against the
Rohingya didn’t just come from Buddhist citizens; many of them came from
the Myanmar military. As Paul Mozur reported for The New York Times, there
was a “systematic campaign on Facebook that stretched back half a decade”
in which “Myanmar military personnel … turned the social network into a
tool for ethnic cleansing.”36 Hundreds of military personnel created fake
accounts on Facebook and used them to surveil citizens, spread disinformation,
silence critics of the government, stoke arguments between rival groups, and
post “incendiary comments” about the Rohingya.
While this operation primarily took place on Facebook, it was multifaceted.
In addition to incendiary text, the Myanmar military also used images,
memes, and peer-to-peer messages, including digital “chain letters” on
Facebook Messenger. One thing that did not appear prominently in this operation was automation. As in the Philippines, there was no significant
presence of bots. A literal army of trolls waged information warfare by
means of digital sockpuppets in real time. The similarity to the operations
perpetrated by Russia’s Internet Research Agency during the 2016 U.S.
elections is hardly coincidental. Though there is no evidence that Russia
was involved in the Myanmar military’s operation, there is evidence that
Myanmar military officers traveled to Russia and studied Russian information warfare tactics.37
The military’s primary target in this psychological warfare operation—or
PsyOp—was the Rohingya people. But in 2017, some of its operations targeted both sides, spreading disinformation to both Buddhists and Muslims and telling each that an attack from the other was imminent. According to
Mozur, “the purpose of the campaign … was to generate widespread feelings
of vulnerability and fear that could be solved only by the military’s protection.”
The fledgling democracy posed a threat to the military that had until very recently run the country. The military took advantage of the people’s general lack
of digital media literacy and their inexperience with democratic free
expression, and turned that vulnerability into a weapon aimed at its own
people. And by targeting the Rohingya, who were already the victims of
misinformation and hate speech online, they threw fuel on an existing fire,
contributing significantly to one of the largest humanitarian crises of the
twenty-first century.
36Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” The New York Times, published October 15, 2018, www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
37Ibid.
Success in the Latin American Elections of 2018
Summer 2018 was a busy time for Latin American politics. There were
presidential elections in Colombia and Mexico, political theater masquerading
as an election in Venezuela, and major unrest in Nicaragua, which was riddled
with anti-government protests. I spent the summer primarily focused on
monitoring and reporting on the elections in Colombia and Mexico and
studying the Latin American (dis)information landscape more broadly. We
observed nothing at the scale of what took place in the United States, the
Philippines, or Myanmar—likely (hopefully!) because the social platforms have
learned some lessons and started to implement changes that make it harder
(though by no means impossible) to conduct effective disinformation
operations. However, a counter-trend also emerged, where instead of one
bad actor launching a campaign for or against a particular candidate, party, or
people group, we observed many smaller operations in service of a variety of
candidates and interest groups. We also observed a new trend that may
explain the smaller-scale operations on Twitter and Facebook, but also poses
a major threat to democracies in the near future: peer-to-peer messaging.
Let’s unpack these trends.
First, the social networks. As my colleagues and I summarize in our post-election report:
We analyzed content broadly collected on Twitter via key search
terms, as well as more selective targets on Facebook. Overall, we
found evidence of nine coordinated networks artificially amplifying
messages in favor of or against a candidate or party in Mexico, and
two coordinated networks in Colombia. While the volume of coordinated information operations in Colombia was noticeably lower,
one of those operations was international in focus, aimed at stoking
anti-government sentiment and action in Colombia, Mexico,
Nicaragua, Venezuela, and Catalonia. While some of these networks
were difficult to attribute, we traced some back to the responsible
person(a)s, and at least two of them show indications of foreign
involvement.38

38“2018 Post-Election Report: Mexico and Colombia,” New Knowledge, accessed December 1, 2018, www.newknowledge.com/documents/LatinAmericaElectionReport.pdf.

Most of these networks were automated, with little sophistication and, as far as we could tell, little impact. In fact, one of them, which was online for only roughly 48 hours, demonstrates the progress that the social platforms have made in detecting and removing botnets. This was a network of 38 Twitter accounts, controlled by an individual we traced on
Facebook, each posting hundreds of messages per day, drawn from a small library of posts, resulting in hundreds of
identical copies of each of these posts across the network. These messages
were always in opposition to Ricardo Anaya, the presidential candidate
who ultimately finished second, or in support of long-shot candidate Jaime
Rodríguez Calderón, nicknamed “El Bronco.” We dubbed this botnet the
Bronco Bots.
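To make that detection signal concrete, here is a minimal sketch in Python. It assumes a hypothetical input of (account, text) pairs rather than any platform’s actual API, and it illustrates the general technique of surfacing verbatim copy-paste amplification, not the tooling we actually used.

from collections import defaultdict

def find_copypaste_clusters(posts, min_copies=100, min_accounts=10):
    # `posts` is an iterable of (account_id, text) pairs -- a hypothetical
    # input format for illustration, not any platform's actual API.
    clusters = defaultdict(list)
    for account_id, text in posts:
        # Normalize case and whitespace so trivial variations still match.
        key = " ".join(text.lower().split())
        clusters[key].append(account_id)
    # Flag messages repeated verbatim across many distinct accounts: the
    # Bronco Bots' signature was a small library of posts, each copied
    # hundreds of times across 38 coordinated accounts.
    flagged = [
        (text, len(set(accounts)), len(accounts))
        for text, accounts in clusters.items()
        if len(accounts) >= min_copies and len(set(accounts)) >= min_accounts
    ]
    # Most-amplified messages first.
    return sorted(flagged, key=lambda row: -row[2])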
The Bronco Bots were very short-lived. We discovered them almost
immediately and reported them to Twitter, and within hours of our report—
and just two days after they came online—they were suspended. It may or
may not have been our report that resulted in the suspensions. In fact, Twitter
has gotten much better since 2016 at identifying mass automation and
suspending the accounts independent of any reports from users or researchers.
That’s obviously a good thing. But we discovered several networks similar to
the Bronco Bots, supporting a variety of candidates, and only some of them
were suspended by the platform before the election (if at all). It’s also worth
noting that in the course of an event unfolding in real time—like the Unite the
Right rally in August 2017 or the chemical weapons attack in Syria in April
2017—a network of bots or fake accounts can do a lot of damage in less than
48 hours, especially if they are amplifying a (false) narrative being pushed by
real people or by nonautomated fake accounts controlled by real people. For
example, we uncovered another anti-Anaya network on Twitter using
automation to amplify a set of YouTube videos advancing already debunked
rumors of alleged corruption by Anaya. By targeting the evening of and
the day after the final presidential debate, they were able to conduct their
operation without worry of account suspensions interfering until attention
had already organically dwindled.
A more insidious, and longer-lasting, botnet arose to stoke anti-government
sentiment, and even promote violence, across Latin America and elsewhere.
The accounts in this network were generally created a few days or weeks
before operations began and primarily pushed anti-government messages to
targeted audiences in Venezuela, Colombia, and Nicaragua, with links to
content on YouTube and two recently created web sites.
The account profiles were all variations on a theme, and the locations given in those profiles appeared to be fake (in some cases, the city-country combinations they claimed as home simply did not exist). Though these accounts
claimed to be different individuals in different countries, the content posted
to these accounts was often identical and always fast-paced and high-volume.
In addition to anti-government messaging in Venezuela and Nicaragua, these
accounts actively attempted to link Colombian presidential candidate Gustavo Petro to the FARC terrorist organization. The FARC and the Colombian government were at war until an unpopular peace deal was signed in 2016. Petro’s support for the
peace deal, combined with his leftist politics, left him open to characterizations
of being a communist or a terrorist, which this botnet seized upon.
(Petro ultimately lost the runoff election to Iván Duque, a right-wing populist
and critic of the Colombia-FARC peace agreement.) While most posts from
this botnet were in Spanish, a few English tweets slid through now and then.
These included links to tech tutorials and a site that focuses on sensitive
social issues in the United States.
And occasionally, links to Twitter automation and analytics tools (oops).
It is clear that the individual or group behind this particular botnet in Latin
America was taking steps to mask their location. There were also indications
that they may not be from Latin America, such as the accidental posting of
English-language content and the lumping together of target audiences
speaking the same language, but living in different countries—even continents,
with the inclusion of posts targeting users in Catalonia. This may indeed have
been a foreign influence operation. But unfortunately—or is it fortunately?—
the network was taken down by Twitter before we could make a high-confidence origin assessment.
The biggest threat in Latin America wasn’t Twitter, though. It also wasn’t
Facebook, or Instagram, or YouTube. It was peer-to-peer messaging, primarily
on the Facebook-owned platform, WhatsApp.
WhatsApp is a messaging service that supports text, voice, and video calling,
as well as file sharing, over an encrypted data connection. For some people, it
represents a less public (and less surveilled) way to connect with friends and
family than Facebook or Twitter. For others, it provides voice, video, and text over wifi or a data connection, saving them money on their monthly phone
bills. Regardless of the reason, WhatsApp is becoming increasingly popular in
some parts of the world, particularly Spanish-speaking countries. In many
places where Facebook usage is on the wane, and Twitter never really made
a splash, WhatsApp is alive with digital communities and information sharing.
For instance, according to Harvard’s Nieman Lab, WhatsApp is the most
popular social platform in Mexico.39
WhatsApp and other private messaging apps (like Signal, Facebook Messenger,
Slack, even good-old-fashioned text messaging and email) pose a significant
challenge for disinformation researchers and fact-checkers. Because of the
high level of connectivity for many who use the app, it is easy for both true
and false information to spread, even to become viral, on WhatsApp. But
because the messages are private and encrypted, there is no easy way to see
what is trending on WhatsApp and in what communities.
39Laura Hazard Owen, “WhatsApp is a black box for fake news. Verificado 2018 is making real progress fixing that.,” Nieman Lab, published June 1, 2018, www.niemanlab.org/2018/06/whatsapp-is-a-black-box-for-fake-news-verificado-2018-is-making-real-progress-fixing-that/.
As we’ve already explored, psychologists and rumor experts DiFonzo and Bordia identify four primary factors that contribute to whether or not
someone believes a claim that they encounter:
• The claim agrees with that person’s existing attitudes
(confirmation bias).
• The claim comes from a credible source (which on social
media often means the person who shared it, not the
actual origin of the claim).
• The claim has been encountered repeatedly (which
contributes to perceptual fluency).
• The claim is not accompanied by a rebuttal.40
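Read as a rough additive heuristic, those four factors might be sketched as follows. This is a toy illustration in Python of the list above, not a model published by DiFonzo and Bordia, and the weights are arbitrary.

def belief_likelihood(confirms_attitudes, credible_source,
                      times_encountered, rebutted):
    # Toy scoring of the four factors above; the weights are arbitrary
    # illustrations, not values from DiFonzo and Bordia's research.
    score = 0.0
    if confirms_attitudes:   # confirmation bias
        score += 1.0
    if credible_source:      # credibility of the sharer
        score += 1.0
    # Repetition breeds perceptual fluency, with diminishing returns.
    score += 0.2 * min(times_encountered, 5)
    if rebutted:             # an accompanying rebuttal suppresses belief
        score -= 1.0
    return score

Note that on an encrypted platform, nearly every input to this function is invisible to outside observers, and the rebuttal is the one lever fact-checkers control.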
Perhaps the biggest obstacle to countering disinformation and misinformation
on private chat apps like WhatsApp is that we don’t know what claims are
encountered repeatedly (going viral), what the sources of those claims are, or
what audiences they are reaching (and what biases they already hold), and so
fact-checkers don’t know what needs rebutting. Because of these obstacles,
when I talk to researchers and policymakers concerned with Latin America,
peer-to-peer messaging is their biggest fear. And having seen the anti-surveillance
writing on the wall, it’s a growing fear among researchers and policymakers in
countries where Facebook, Instagram, and Twitter still dominate, too.41
But all hope is not lost. One initiative that took place during the 2018 Mexican
election proved that it is possible to expose and rebut rumors and
misinformation on private communication apps like WhatsApp. That initiative
was called Verificado 2018.
Verificado 2018 was a collaboration between Animal Político, Al Jazeera, and
Pop-Up Newsroom, supported by the Facebook Journalism Project and the
Google News Initiative. Their goal was to debunk false election-related claims in ways that fit how users actually use the platforms where those claims circulate. Because of WhatsApp’s dominance in the Mexican social media
landscape, Verificado focused heavily on operations there.