Data Versus Democracy

by Kris Shaffer


40 Prashant Bordia and Nicholas DiFonzo, Rumor Psychology: Social and Organizational Approaches (Washington, D.C.: American Psychological Association, 2017).

41 I should note that, while private chat applications make it harder to discover disinformation operations on those applications, those applications are still indispensable to researchers. That’s because we care about the privacy of our communications, and end-to-end encrypted messaging apps are key to avoiding surveillance from adversaries, especially those with the power of governments behind them. As of 2018, Signal, from Open Whisper Systems, is the encrypted communication app most often recommended by security researchers, vulnerable activists, and tech journalists.

Because WhatsApp is a peer-to-peer platform, oriented around sharing messages with individuals or small groups of friends and family, Verificado took a small-scale, social approach to fact-checking. They set up a WhatsApp account where individuals could send them information in need of verification. Verificado’s researchers would then respond individually with the results of their research. This personal touch (which one has to assume involved a fair bit of copy-and-paste when they received multiple inquiries about the same claim) was more organic to the platform and allowed users to interact with Verificado more like the way they interact with others on the platform.

Now, in the face of a coordinated disinformation campaign during a massive general election that involved thousands of individual races, copy-and-paste only scales so far. So Verificado also crafted their debunks in ways that would promote widespread sharing, even virality, on WhatsApp. Multiple times a day, they updated their public status with one of the debunks that was prompted by a private message they received. These statuses could then be shared across the platform, much like tweets or public Facebook posts. They also created meme-like images that contained the false claim along with their true/false evaluation stamped on it. This promoted user engagement, associated their true/false evaluation with the original image in users’ minds, and promoted viral sharing more than simple text or a link to a web article would (though they did post longer-form debunks on their web site as well).42

For the same reasons that it is difficult to study the reach and impact of misinformation and disinformation on WhatsApp, it’s difficult to quantify the reach and impact of Verificado 2018’s work. But the consensus is that they had a nontrivial, positive impact on the information landscape during a complicated, rumor-laden election cycle, and they made more progress on the problem of private, viral mis-/disinformation than just about anyone else to date. They even won an Online Journalism Award for their collaboration.43

42 Owen, “WhatsApp is a black box for fake news.”

43 “AJ+ Español wins an Online Journalism Award for Verificado 2018,” Al Jazeera, published September 18, 2018, https://network.aljazeera.com/pressroom/aj-español-wins-online-journalism-award-verificado-2018.

Peer-to-peer disinformation isn’t going away. As users are increasingly concerned about privacy, surveillance, targeted advertising, and harassment on social media, they are often retreating to private digital communication among smaller groups of people close to them. For many, the social media honeymoon is over, the days of serendipitous global connection gone. Safety, security, and privacy are the new watchwords. In some cases, this means less exposure to rumors and psychological warfare. In other cases, it simply means those threats are harder to track. This is by no means a solved problem, but examples like Verificado 2018 give us hope that solutions are possible and that we might already have a few tricks up our collective sleeves that can help.

  Summary

In this chapter, we’ve explored recent online disinformation operations in the Global South. From Latin America to Northern Africa to Southeast Asia, we have seen the way that social media platforms have amplified rumors, mainstreamed hate speech, and served as vehicles for psychological warfare operations. In some cases, this misinformation and disinformation has not only fueled political movements and psychological distress but also motivated offline physical violence and even fanned the flames of ethnic cleansing and genocide.

The problem of online disinformation is bigger and more diverse than many in the West realize. It’s bigger than “fake news,” bigger than Russia and the American alt-right, bigger than the Bannons and the Mercers of the world, bigger than Twitter bots, and even bigger than social network platforms. As long as there has been information, there has been disinformation, and no society on our planet is immune from that. This is a global problem, and a human problem, fueled—but not created—by technology. As we seek to solve the problem, we’ll need a global, human—and, yes, technical—solution.

Chapter 7

Conclusion

Where Do We Go from Here?

Information abundance, the limits of human cognition, excessive data mining, and algorithmic content delivery combine to make us incredibly vulnerable to propaganda and disinformation. The problem is massive, cross-platform, and cross-community, and so is the solution. But there are things we can do—as individuals and as societies—to curb the problem of disinformation and secure our minds and communities from cognitive hackers.

  The Propaganda Problem

Over the course of this book, we’ve explored a number of problems that leave us vulnerable to disinformation, propaganda, and cognitive hacking. Some of these are based in human psychology. Confirmation bias predisposes us to believe claims that are consistent with what we already believe and closes our minds to claims that challenge our existing worldview. Attentional blink makes it difficult for us to keep our critical faculties active when encountering information in a fast-paced, constantly changing media environment. Priming makes us vulnerable to mere repetition, especially when we’re not conscious of that repetition, as repeated exposure to an idea makes it easier for our minds to process, and thus believe, that idea. All of these traits, developed over aeons of evolutionary history, make it easy for us to form biases and stereotypes that get reinforced, even amplified, over time, without any help from digital technology.

Some of the problems are technical. The excessive mining of personal data, combined with collaborative filtering, enables platforms to hyper-target users with media that encourages online “engagement,” but also reinforces the biases that led to that targeting. Targeted advertising puts troves of that user data functionally—if not actually—at the fingertips of those who would use it to target audiences for financial or political gain.
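To make that mechanism a little more concrete, here is a minimal, hypothetical sketch of user-based collaborative filtering. The engagement matrix, the item indices, and the recommend function are all invented for illustration; they are not taken from any real platform.

```python
# Minimal, hypothetical sketch of user-based collaborative filtering.
# The engagement matrix and all values are invented for illustration only.
import numpy as np

# Rows = users, columns = pieces of content; 1 = engaged, 0 = did not engage.
engagement = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
    [0, 1, 0, 1, 1],   # user 3
], dtype=float)

def recommend(user, k=2):
    """Recommend items that the most similar users have engaged with."""
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(engagement, axis=1)
    sims = engagement @ engagement[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1.0  # do not count the user as their own neighbor

    # Pool the neighbors' engagement, weighted by how similar they are.
    scores = sims @ engagement

    # Never re-recommend what the user has already seen; rank the rest.
    scores[engagement[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(0))  # items favored by users who already resemble user 0
```

The feedback is the point of the toy example: whatever a user’s look-alike “neighbors” have already engaged with is exactly what gets surfaced next, so the recommendations steer each user back toward content that resembles their existing behavior rather than challenging it.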

Some of the problems are social. The rapidly increasing access to information and people that digital technology affords breaks us out of our pluralistic ignorance, but we often aren’t ready to deal with the social implications of information traveling through and between communities based primarily on what sociologists call weak ties.

When information abundance, human psychology, data-driven user profiling, and algorithmic content recommendation combine, the result—unchecked—can be disastrous for communities. And that appears to be even more the case in communities for whom democracy and (relatively) free speech are also new concepts.

Disinformation is fundamentally a human problem. Yes, technology plays its part, and as argued earlier in this book, new technology is neither inherently bad nor inherently good nor inherently neutral. Each new technology has its own affordances and limitations that, like the human mind, make certain vulnerabilities starker than others. But ultimately, there is no purely technical solution to the problem. Disinformation is a behavior, perpetrated by people, against people, in accordance with the fundamental traits (and limitations) of human cognition and human community. The solution necessarily will be human as well.

Of course, that doesn’t mean that the solution will be simple. Technology changes far more rapidly than human biology evolves, and individuals are adopting new technologies faster than communities are adapting to them. Lawmakers and regulators are probably the furthest behind, as many of the laws that govern technology today—in the United States, at least—were written before the advent of the internet.1 Perhaps most striking, though, is the surprise that many of the inventors of these technologies experience when they witness the nefarious ways in which their inventions are put to use. If the inventors, whose imagination spawned these tools, can’t envision all of the negative ends to which these technologies can be directed, what chance do users, communities, and lawmakers have?!

1 Key U.S. laws written before the advent of the internet include the Computer Fraud and Abuse Act (1984), the Family Educational Rights and Privacy Act (1974), and, for all practical purposes, the Health Insurance Portability and Accountability Act (1996).

The solutions may not be simple, and we may not be able to anticipate and prevent all antisocial uses of new technology, but there are certainly things we can do to make progress.

Consider the bias-amplification flow chart from Chapter 3. It is tuned primarily for search engines, but it applies to most platforms that provide users with content algorithmically. Each of these elements represents something that bad actors can exploit or hack. But they each also represent a locus of resistance for those of us seeking to counter disinformation.

For example, the Myanmar military manipulated existing social stereotypes to encourage violence against the Rohingya people and dependence on the military to preserve order in the young quasi-democratic state. Using information about those existing stereotypes, they created media that would exacerbate those existing biases, encouraging and amplifying calls to violence. They not only directly created pro-violence media and amplified existing calls to violence, they also affected the content database feeding users’ Facebook timelines and created content that encouraged user engagement with the pro-violence messages. Thus, the model, which took that content and user activity history as inputs, further amplified the biased media delivered to users in their feeds. And then the cycle began again.
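As a rough, hypothetical illustration of that cycle (a toy loop, not a model of any platform’s actual recommender), the sketch below assumes an engagement-trained scoring model and a user who is slightly more likely to engage with inflammatory content; over a few rounds, the inflammatory share of the feed creeps upward.

```python
# Toy simulation of the bias-amplification cycle described above.
# All numbers and labels are invented for illustration only.
import random

random.seed(1)

# The model's learned sense of how "engaging" each kind of content is.
weights = {"inflammatory": 1.0, "neutral": 1.0}

# Assumed user behavior: a modest preference for engaging with outrage.
click_prob = {"inflammatory": 0.6, "neutral": 0.4}

for round_num in range(1, 6):
    # The feed samples content in proportion to the model's current weights.
    feed = random.choices(list(weights), weights=list(weights.values()), k=100)

    # Each engagement feeds back into the model as a stronger weight.
    for item in feed:
        if random.random() < click_prob[item]:
            weights[item] += 0.1

    share = feed.count("inflammatory") / len(feed)
    print(f"round {round_num}: inflammatory share of feed = {share:.0%}")
```

Even with a small behavioral difference, the loop compounds: more engagement raises the weight, a higher weight means more exposure, and more exposure produces still more engagement, which is the shape of the cycle described above.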

Similar cycles of bias amplification have led to increased political polarization in the United States, as we explored in Chapters 4 and 5. Even the perpetrators of harassment during GamerGate, many of whom we would likely consider to be radicalized already, engaged in a game of trying to outdo each other in engagement, victim reaction, or just plain “lulz.” Since content that hits the emotions hardest, especially anger, tends to correlate with stronger reactions, it’s not surprising that the abhorrence of GamerGate accelerated furiously at times, as GamerGaters sought to “win” the game.

Users can counter this vicious cycle, in part, by being conscious of their own personal and community biases and engaging in activity that provides less problematic inputs to the model. I don’t mean employing “bots for good” or “Google-bombing” with positive, inspirational messages. I’m a firm believer that any manipulative behavior, even when employed with good intentions, ultimately does more social harm than social good. Rather, what I mean is using one’s awareness of existing individual and community biases and making conscious choices to resist our “defaults” and the biases they represent.

For example, one of my digital storytelling students was a young woman of color who realized that when she chose visual media for her blog posts, she was choosing the “default” American image—one that was very white- and male-oriented. So she resolved in her future projects to include images that represent her own demographic, doing a small part to bring the media landscape more in line with the diversity that is actually present in our society. Remember, it’s the defaults—both mental and algorithmic—that tend to reinforce existing bias. So by choosing non-defaults in our own media creation and consumption, and by being purposeful in the media we engage with, we can increase the diversity, accuracy, and even justice of the content and user activity that feeds into content recommendation models.

However, individual solutions to systemic problems can only go so far. But as Cathy O’Neil argues in her book Weapons of Math Destruction, the same data, models, and algorithms that are used to target victims and promote injustices can be used to proactively intervene in ways that help correct injustices.2 Activists, platforms, and regulators can use user activity data and content databases to identify social biases—even unconscious, systemic biases—and use those realizations to trigger changes to the inputs, or even the model itself. These changes can counter the bias amplification naturally produced by the technology—and possibly even counter the bias existing in society. This is what Google did to correct the “Did the holocaust happen?” problem, as well as the oppressive images that resulted from searches for “black girls” that we discussed in Chapter 3.

2 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2017), p. 118.
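One hedged sketch of what that kind of audit might look like, using entirely invented group labels and counts: compare how often each community’s content appears in recommendations with how prevalent it is in the underlying content database, and derive correction weights from the gap.

```python
# Hypothetical bias audit: compare each group's share of recommendations
# with its share of the content database, then derive correction weights.
# The group labels and counts below are invented for illustration.
from collections import Counter

recommended = ["group_a"] * 80 + ["group_b"] * 20   # what the model served
content_pool = ["group_a"] * 55 + ["group_b"] * 45  # what actually exists

def exposure_disparity(recs, pool):
    """Ratio of a group's share of recommendations to its share of the pool."""
    rec_counts, pool_counts = Counter(recs), Counter(pool)
    return {
        group: (rec_counts[group] / len(recs)) / (pool_counts[group] / len(pool))
        for group in pool_counts
    }

disparity = exposure_disparity(recommended, content_pool)
print(disparity)  # > 1.0 means over-exposed; < 1.0 means under-exposed

# One possible intervention: reweight the ranking stage to offset the gap.
correction = {group: round(1 / ratio, 2) for group, ratio in disparity.items()}
print(correction)
```

Nothing in this sketch is specific to Google’s actual fixes described above; it is only meant to show that the same activity data that amplifies a bias can also be used to measure it and push back against it.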

Now, such corporate or governmental approaches to rectifying social injustices lead quickly to claims of censorship. Intervening in content and algorithmic recommendations, the argument goes, will simply bring in the bias of programmers, at the expense of the free speech of users, making the platforms the biased arbiters of truth and free expression. This is a very real concern, especially seeing how often platforms have failed when they have attempted to moderate content. But if we begin from the realization that content recommendation engines amplify existing social bias by default, and that that bias amplification itself limits the freedom of expression (and, sometimes, the freedom simply to live) of certain communities, it can give us a framework for thinking about how we can tune algorithms and policies to respect all people’s rights to life, liberty, and free expression. Again, it’s not a simple problem to solve, but as long as doing nothing makes it worse (and current data certainly suggests that it does), we need to constantly reimagine and reimplement the technology we rely on in our daily lives.

Take Russia’s activity in 2016 as another example. In contrast to the Myanmar military, much of their activity began with community building. They shared messages that were in many ways innocuous, or at least typical, expressions of in-group sentiment. This allowed them to build large communities of people who “like” (in both the real-world sense and the Facebook sense) Jesus, veterans, racial equity, Texas, or the environment. The more poignant attacks and more polarizing messages often came later, once users had liked pages, followed accounts, or engaged regularly with posts created by IRA “specialists.” The result was a “media mirage,” and in some sense a “community mirage,” where users’ activity histories told the content recommendation model to serve them a disproportional amount of content from IRA-controlled accounts or representing Kremlin-sympathetic views. This not only gave the IRA a ready-made audience for their fiercest pro-Trump, anti-Clinton, and vote-suppression messages as the election neared. It also meant that users who had engaged with IRA content were more likely to see content from real
 
