It is impossible to say with certainty whether this Facebook “voter suppression” strategy worked, or indeed how it weighed alongside the myriad variables that determine electoral success. We do know that, compared with Obama in 2012, support for Clinton declined among African Americans, largely because more of them did not vote. When coupled with the increase in rural, white, working-class voters who supported Trump, this small drop in turnout may have made a difference, especially in states such as Michigan, Wisconsin, and North Carolina (Ben-Shahar, 2016; Tyson & Maniam, 2016). But, as with any election, we also know that there were potentially many other reasons that Clinton was less popular than Obama with the Democratic Party’s voter base. And in some states, the introduction of new photo ID requirements for voting deterred ethnic minorities from turning out (Hajnal, et al., 2017).
Still, the intensification of Facebook advertising and the willingness to hire companies like CA were significant departures from the campaign models of 2008 and 2012. Over recent years, Facebook has refined the suite of tools it makes available to advertisers. These have been taken up with relish by campaigns, with the assistance of paid consulting firms. This represents something of a shift away from the email microtargeting that became the gold standard from 2004 onward. For campaigns, the advantages of Facebook ads are obvious. Emails often become trapped in spam filters. Dark post ads do not, and they appear in a user’s news feed alongside the rest of his or her daily diet of content. They also come with the data on user preferences that Facebook provides, in close collaboration with campaigns.
In the future, the advantages of layering psychometric data into models based on demographics and other data sources will become obvious, but only if a campaign is able to develop and test sound hypotheses linking personality types to support for a candidate’s messages. Given that Facebook has developed its advertising platform into a suite of tools that enable rapid and large-scale experimental testing of many thousands of ad variations, psychometric data may begin to feed into these processes. Nevertheless, this is still a highly labor-intensive approach that requires a team of staff to develop and test the ad variations, and there is little evidence that it played a major role in 2016.
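What large-scale experimental ad testing looks like in practice can be sketched in a few lines of code. The following is a minimal, hypothetical simulation of a test-and-promote loop; the variant counts, click-through rates, and impression budgets are invented for illustration, and real ad platforms run far more sophisticated optimization.

```python
import random

# Hypothetical sketch of large-scale ad-variant testing.
# All numbers below are invented for illustration.

def clicks_from(true_ctr: float, impressions: int) -> int:
    """Simulate how many of `impressions` viewers click an ad."""
    return sum(random.random() < true_ctr for _ in range(impressions))

# A campaign uploads thousands of headline/image combinations,
# each with some unknown "true" click-through rate.
variants = {f"ad_{i}": random.uniform(0.005, 0.03) for i in range(5000)}

# Serve a small test budget of impressions to every variant.
TEST_IMPRESSIONS = 200
observed_ctr = {name: clicks_from(ctr, TEST_IMPRESSIONS) / TEST_IMPRESSIONS
                for name, ctr in variants.items()}

# Promote only the top performers to the full ad spend.
winners = sorted(observed_ctr, key=observed_ctr.get, reverse=True)[:10]
print("Variants promoted to full budget:", winners)
```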
The more important point is that 2016 marked a departure in other ways. From the 2004 election onward, it became a truism that campaigns raise their money online but spend it on targeted television advertising (Anstead & Chadwick, 2009; see also chapter 6; Chadwick, 2006: 162–167). At times this acted as a brake on the growth of campaigns’ digital teams. Trump moved away from this model and invested in a large digital staff to support his Facebook operation. This shift signaled a rebalancing of older and newer media logics. Television was essential to Trump’s campaign, but television advertising was not; yet Facebook advertising, integrated with Trump’s television appearances, was.
Fakes, Bots, and Hacks: Dysfunctional Hybridity
On November 5, 2016, three days before election day, the state of Colorado’s largest and most respected newspaper, the Denver Post (founded 1892), published an extraordinary article on its website. Written by reporter Eric Lubbers, the piece was titled “There Is No Such Thing as the Denver Guardian, Despite That Facebook Post You Saw.” Lubbers’s opening line was “The ‘Denver Guardian’ is not a real news source and definitely isn’t Denver’s oldest news source” (Lubbers, 2016).
The Denver Post was alerting its readers to a website, denverguardian.com, that contained an article with the headline, “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide.” The denverguardian.com article claimed that an FBI employee who had been involved in the FBI investigation into Hillary Clinton’s use of a private email server during her time at the State Department had been found dead in a house fire in Maryland. The article was entirely fabricated. There was no such thing as the “Denver Guardian.” The site’s domain name had been registered in July 2016. There were no other news articles on the site, the address listed on the site was for a parking lot, and the image used had been taken from a random Flickr account.
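Several of these telltale signs can be checked automatically. As a minimal sketch, assuming the third-party python-whois package (pip install python-whois), a domain’s registration date can be pulled in a few lines; the site is long gone, so treat this as illustrative rather than reproducible:

```python
import datetime

import whois  # third-party: python-whois (an assumption; other WHOIS clients work too)

# Look up the registration record for the fabricated "news" site.
record = whois.whois("denverguardian.com")

created = record.creation_date
if isinstance(created, list):  # some registrars return several dates
    created = min(created)

age_days = (datetime.datetime.now() - created).days
print(f"Registered {created:%Y-%m-%d}, {age_days} days ago")
# A "city's oldest news source" registered in July 2016 is a red flag.
```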
And yet, the denverguardian.com article looked convincing. Aside from a few uncompleted sections of the site’s WordPress template, to many readers clicking through from elsewhere this could easily have passed for legitimate professional journalism. It conformed to journalism’s genre. It was not written in a sensationalist style; it was properly punctuated, included links to other sources, and carried (fabricated) quotations from the police and from FBI director James Comey. This made-up article had the look and feel of online journalism in the year 2016, even down to the holding line it bore to convey the thrill and immediacy of real-time online news: “this is a developing story.” The article was shared more than half a million times on Facebook, exposing large numbers of individuals to fabricated information. Over ten days, the article received 1.6 million views (Sydell, 2016).
The fake article was the product of a team of twenty-five writers employed by a company called Disinfomedia. Owned by a Los Angeles–based businessman, Jestin Coler, Disinfomedia made between $10,000 and $30,000 a month during the 2016 campaign—from the advertisements on denverguardian.com and a mini-empire of similar lookalike news sites with domains such as usatoday.com.co and washingtonpost.com.co. Many of these ads were placed with the easy-to-use Google AdSense platform that allows website owners to generate income from display ads (Sydell, 2016). This is just one example of dysfunctional hybridity in 2016.
By “dysfunctional hybridity,” I mean processes in which the interdependence among older and newer media logics may contribute to the erosion of democratic norms. The fake news of 2016 depended on a combination of media affordances and systemic trends: the design of social media platforms and search engines, and the intense competitive pressure on professional journalism caused by the digitalization of news and the acceleration of news cycles.
Yet the problem of fake news was just one of a broader set of problems. The 2016 campaign also saw two further threats to democratic norms: the rise of technologically enabled, automated social media bot (software robot) interventions, and politically motivated hacking. These three developments—fake news, social media bots, and politically motivated hacking—are the dark frontier of the hybrid media system. They could not exist without some of the incentive structures and media affordances that now shape political communication.
FAKE NEWS AS FABRICATED NEWS
There is much to be said about the fake news scandal of 2016, and this is not the place for a comprehensive analysis. In many respects, like the role played by Facebook advertising in Trump’s campaign, the influence of fake news on the outcome of the election is not easily identifiable. Indeed, this is a troubling aspect of its emergence. But let us consider how the hybrid media system enabled its rise.
When using the term fake news in the context of the 2016 campaign, it is important to be precise. Ideological bias, sensationalism, exaggeration, satire, and even simple fabrication have always been a part of the professional news industry and the internet more broadly. Equally, the argument that tabloid newspapers’ regular output of celebrity gossip is fake news does not help much in determining what was new in 2016. Matters are further complicated by the fact that, over recent years, a raft of satirical online news sites has emerged, like the Daily Currant, whose business model is based on generating ad revenues by running humorous invented articles (Rensin, 2014). And lumping together news sites that are simply ideologically biased with sites that are based on fabricated articles (Albright, 2016) may also obscure the real problem. Conservative news sites may run plenty of slanted, exaggerated stories containing material recycled from other sites, but they are not the same thing as fake news.
As we shall see, however, conservative sites were certainly important enablers of the creation of fake news. The key point is that in 2016 there was a broader systemic problem that cannot be reduced to the mere existence of a network of right-wing sites. With all of this in mind, here I define fake news as follows: the exploitation of the technological affordances and incentive structures of social media platforms, online search engines, and the broader news media industry to spread fabricated information for financial and/or political gain. Put more bluntly, the fake news problem of 2016 was a hybrid media hack. Let us unpack how it worked by examining the most startling development: the so-called Macedonian “news factory.”
During the summer of 2016, a group of young people based in the small town of Veles, which lies at the center of the Former Yugoslav Republic of Macedonia, registered more than 150 web domain names. They then populated these domains using free templates from the well-known blogging platform WordPress. The domain names, for example USConservativeToday.com, USADailyPolitics.com, NewYorkTimesPolitics.com, and DonaldTrumpNews.co, were designed to look like the sites of news organizations based in the United States. The youngsters in Veles filled these sites with pro-Trump news articles that they thought would go viral. The idea was that Trump supporters would share them to signal solidarity, while Clinton supporters would share them to signal outrage.
Many of the news factory’s articles were copied and pasted, with key modifications, from a wide range of conservative and even mainstream online news sites in the United States. But many of the articles were entirely made up (Silverman & Alexander, 2016; Subramanian, 2017; Tynan, 2016). The fact that a network of conservative tabloid-style sites, such as Breitbart, Daily Caller, The Blaze, Infowars, Ending the Fed, and the Washington Examiner, had grown their audiences since the 2012 election was an important enabling force. Without this raw material, it would have been much more difficult to get the news factory up and running.
The next stage saw the fake news creators sign up to Google AdSense. AdSense is the ad syndication platform that allows website owners to make revenue based on the number of page impressions and clicks that a site receives. Once the AdSense code was embedded on each site, the fake news creators posted links to individual news articles on the Facebook pages of multiple American political groups, including popular conservative groups with hundreds of thousands of members, such as My America My Home, and Friends Who Support President Donald J. Trump. They posted the links in their own names but, to speed up the process and escape detection, they also posted using hundreds of fake Facebook profiles that they had purchased online for about 50 cents each. They even bought Facebook ads to ensure that their posts appeared in users’ feeds.
With the WordPress sites populated with pro-Trump news articles, the AdSense account set up, the site code embedded, and the links seeded to Facebook, the final step was to sit back, wait for the articles to be shared by Facebook users, and watch the AdSense revenue roll in from the page impressions and clicks generated by site visits. AdSense generates tiny amounts of money, only fractions of a cent, per page impression. But given how many shares, reactions, and comments these articles received, it was possible for the Veles youngsters to generate enough clicks to earn substantial sums—around $4,000 a month in a country where the average monthly salary is $371—mostly from American citizens eager to circulate articles in their social networks.
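The arithmetic behind those sums is straightforward. In the back-of-envelope sketch below, only the roughly $4,000 monthly income and the $371 average salary come from the reporting above; the share counts, click-through multiplier, and AdSense rate are assumptions chosen purely to show how fractions of a cent per impression compound at scale.

```python
# Back-of-envelope model of the Veles revenue stream.
# Assumed inputs (not from the reporting):
shares_per_article = 50_000    # Facebook shares a successful article attracts
views_per_share = 4            # friends who click through per share
articles_per_month = 40        # posting rate across the site network
rpm_usd = 0.50                 # assumed AdSense revenue per 1,000 page views

monthly_views = shares_per_article * views_per_share * articles_per_month
monthly_revenue = monthly_views / 1_000 * rpm_usd

print(f"{monthly_views:,} views -> ${monthly_revenue:,.0f} per month")
# 8,000,000 views -> $4,000 per month: more than ten times Macedonia's
# average monthly salary of $371.
```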
In some cases, the amount of engagement these articles received was truly extraordinary. For example, a fake article on the site ABCNewscom.com, “Obama Signs Executive Order Banning the Pledge of Allegiance in Schools Nationwide,” generated 2.17 million Facebook shares, comments, and reactions. Unlike the Disinfomedia project that spawned the fake Hillary Clinton FBI agent story, the Macedonian fake news factory does not appear to have been driven by ideological goals. It was concerned only with generating income from the hall of mirrors enabled by Facebook’s and Google’s platforms. Of course, the factory’s role in shaping the structure of attention during the campaign may well have been ideological in its effects. This is an important avenue of future research.
To explain how and why fake news came to exist in this form, we need to situate the Veles news factory in a broader web of systemic interdependencies.
The underlying web technologies that make news sites look and function the way they do have become radically democratized. Sites built with blogging platforms such as WordPress can look remarkably authoritative even when placed alongside the slickest elite media organizations’ sites, many of which, of course, also contain blogs.
The news factory was based on a mix of plagiarized bits and pieces of other articles, spliced together with added fake images and headlines, as well as outright fabrication. Much of the raw material and many of the genres of these articles stemmed from journalism produced by the network of right-wing news sites that were an important part of the conservative movement in 2016. These sites played a broader role in galvanizing support for Trump, as well as influencing the elite media agenda during the campaign (Benkler, et al., 2017).
Facebook had been important for the growth of these right-wing sites after 2008. Facebook’s revenue model as an advertising platform is based upon sharing, particularly sharing among family members and like-minded networks of individuals. Google’s ad syndication platform incentivizes advertisers to create content targeted to specific online audiences (in this case partisans), and that ad platform is often blind to the authenticity of the websites whose audiences it sells to its advertisers.
Fake news also worked because, in a bitterly polarized partisan struggle, supporters of Trump and Clinton wanted to generate solidarity by sharing news that they hoped would show their opponents in the worst possible light. By 2016, some 67 percent of American adults used Facebook, and 44 percent of the adult population reported getting news there (Pew Research Center, 2016c).
In addition, the mobile internet has altered how we consume the news, transforming news cycles based on a couple of deadlines a day into political information cycles driven by constant real-time interventions by journalists, bloggers, politicians, activists, and ordinary members of the public eager, and able, to share these interventions in their own social media networks.
The new, digitally native news organizations such as BuzzFeed, Vice News, and the Huffington Post have learned to compete in this environment by crafting articles with sensationalist click-bait headlines and attention-grabbing images that provide advertisers with evidence of user “engagement.” To shore up revenue, many reputable media organizations, such as the Washington Post, the UK’s Guardian, and CNN, to name just a few, participate in the ad syndication game themselves, hiring “content recommendation” companies like Outbrain, Taboola, and Revcontent, which dump algorithmically generated ads and poor-quality click-bait stories “from around the web” at the bottom of news article pages. And both digitally native and pre-digital news organizations now depend heavily on Facebook for generating traffic to their websites.
Finally, Facebook and Google have grown to become vast, sprawling megaplatforms that are woven into the fabric of the web in countless ways that are often unclear to those outside the arcane worlds of online analytics and marketing.
This panoply of affordances and incentive structures—some from older media, some from newer media—is not likely to be amenable to a quick technological fix.
SOCIAL MEDIA BOTS AND THE TELEVISED DEBATES
A second threat to democratic norms emerged in 2016 in the form of what Phil Howard and his colleagues have termed “computational propaganda” (see also Ferrara, et al., 2016; Kollanyi, et al., 2016a, 2016b). Like fake news, this is a relatively recent development at the hybrid media system’s dark frontier.
In liberal democratic contexts, the changing nature of political media events such as televised candidate debates is essential to understanding the significance of computational propaganda. Over the last five years, dual screening—the bundle of practices that involve integrating, and switching across and between, live broadcast media and social media—has become a well-established feature of media events (Chadwick, et al., 2017; Vaccari, et al., 2015; see also chapter 3). These new practices are reshaping political agency, and the effects are scaling up to alter the structure of communication relating to televised campaign debates. Debates are now characterized by competition, conflict, and partisanship, but also interdependence, among actors who attempt to steer the flow and meanings of debate-related news. Journalists and politicians have integrated social media into their working practices. Broadcasters commission social media sentiment analysis and real-time online polls, and present vox-pop tweets from the viewing public to provide a demotic presence in the studio and the post-event “spin room.” However, the power of political staff and journalists is increasingly prone to disruption by social media user-audience networks.
As dual-screened debates have become more popular, the stakes have grown. The 2016 campaign revealed that a surprising proportion of the social media discourse generated during the televised campaign debates was inauthentic: the product of automated and semi-automated social media bots whose masters sought to shape perceptions of the debate. Why does this matter?
Like the fake news phenomenon, the growth of political bots was produced by a confluence of specific social media platform affordances and some of the incentive structures that guide political actors and journalists in the hybrid media system. People use social media to acquire information and news about the campaign, to share information and opinions with others, and to try to influence the interpretive framing of their online followers, journalists, and politicians (Chadwick, 2011a; Chadwick, et al., 2017; Freelon & Karpf, 2014; Mascaro & Goggins, 2015). They evaluate and fact-check television presenters and try to place marginalized issues on reporters’ agendas. They create and circulate specific hashtags, send publicly accessible tweets to journalists and campaign elites, craft satirical posts in attempts to generate shareable memes and viral information cascades, and try to subvert official news framings through the use of culturally resonant affect, counterpoint, satire, exaggeration, sarcasm, and trolling. This is, in effect, a much more widely distributed, social media–enabled set of behaviors than those identified in Lang and Lang’s (2002) broadcast-era work on how television presenter commentary shapes audience perceptions.
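To make concrete what separates this distributed human activity from automation, the sketch below shows the kind of crude heuristic an analyst might use to flag bot-like debate accounts. The fields and thresholds are assumptions for illustration only, not the method of any particular study.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_hour: float   # posting rate during the debate window
    duplicate_ratio: float   # share of tweets identical to other accounts'
    account_age_days: int

def looks_automated(acct: Account) -> bool:
    """Flag accounts that post implausibly fast, repeat canned text,
    or were created just before the event. Thresholds are illustrative."""
    return (acct.tweets_per_hour > 60
            or acct.duplicate_ratio > 0.8
            or (acct.account_age_days < 7 and acct.tweets_per_hour > 20))

sample = [
    Account("@debate_watcher", 4.0, 0.05, 900),   # hypothetical human user
    Account("@maga_cascade_417", 180.0, 0.95, 3), # hypothetical bot account
]
for acct in sample:
    label = "bot-like" if looks_automated(acct) else "human-like"
    print(acct.handle, "->", label)
```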