The Death of Truth

by Michiko Kakutani


  The same web that’s democratized information, forced (some) governments to be more transparent, and enabled everyone from political dissidents to scientists and doctors to connect with one another—that same web, people are learning, can be exploited by bad actors to spread misinformation and disinformation, cruelty and prejudice. The possibility of anonymity on the web has promoted a toxic lack of accountability and enabled harassers and trolls. Giant Silicon Valley companies have collected user data on a scale rivaling that of the NSA. And the explosion of internet use has also amplified many of the dynamics already at play in contemporary culture: from the self-absorption of the “Me” and “selfie” generations, to the isolation of people in ideological silos and the relativization of truth.

  The sheer volume of data on the web allows people to cherry-pick facts or factoids or nonfacts that support their own point of view, encouraging academics and amateurs alike to find material to support their theories rather than examining empirical evidence to come to rational conclusions. As Nicholas Carr, the former executive editor of the Harvard Business Review, wrote in The Shallows: What the Internet Is Doing to Our Brains, “We don’t see the forest when we search the Web. We don’t even see the trees. We see twigs and leaves.”

  On the web, where clicks are everything, and entertainment and news are increasingly blurred, material that is sensational, bizarre, or outrageous rises to the top, along with posts that cynically appeal to the reptilian part of our brains—to primitive emotions like fear and hate and anger.

  In this era of nervous distraction and information overload, attention is the most precious commodity on the internet. And as the law professor Tim Wu observed in his book The Attention Merchants, sites gradually learned in the early 2010s how to make content consistently go viral: often the “urge to share was activated by a spectrum of ‘high-arousal’ emotions, like awe, outrage, and anxiety.”

  By 2015, Wu wrote, the web—once “a commons that fostered the amateur eccentric in every area of interest”—was overrun by “commercial junk, much of it directed at the very basest human impulses of voyeurism and titillation.” There were “vast areas of darkness” now—like “the lands of the cajoling listicles and the celebrity nonstories”—that were “engineered for no purpose but to keep a public mindlessly clicking and sharing away, spreading the accompanying ads like a bad cold.”

  * * *


  While public trust in the media declined in the new millennium (part of a growing mistrust of institutions and gatekeepers, as well as a concerted effort by the right wing to discredit the mainstream press), more and more people started getting their news through Facebook, Twitter, and other online sources: by 2017, two-thirds of Americans said they got at least some of their news through social media. This reliance on family and friends and Facebook and Twitter for news, however, would feed the ravenous monster of fake news.

  Fake news is nothing new, of course: sensationalized press coverage helped drum up public support for the Spanish-American War, and Julius Caesar spun his conquest of Gaul as a preventive action. But the internet and social media allow rumors, speculation, and lies to flash around the world in a matter of seconds: like the preposterous Pizzagate stories and the baseless stories claiming that the man behind the massacre of fifty-eight people in Las Vegas in October 2017 was an anti-Trump liberal who followed MoveOn.org and had recently become a Muslim.

  During the last three months of the 2016 presidential campaign, BuzzFeed News reported, “top-performing” fake election news stories on Facebook generated more reader engagement than top stories from major news organizations like The New York Times, The Washington Post, NBC News, and The Huffington Post. Of the twenty fake stories, all but three were pro-Trump or anti–Hillary Clinton, including one which claimed that Clinton had sold weapons to ISIS and another which claimed that the pope had endorsed Trump. A study from Oxford University’s Internet Institute found that, on Twitter, a network of Trump supporters circulated more junk news than any other political group in the sample. And a 2018 Politico analysis found that voters in so-called news deserts—places with low numbers of news subscribers—went for Trump in greater numbers than voters in places where independent media could check his assertions.

  As the role that social media had played in spreading fake news and enabling Russian efforts to interfere in the 2016 U.S. election became increasingly clear, some Silicon Valley insiders experienced a kind of existential crisis. They worried that the magical tools they had helped create were becoming Frankensteinian monsters. Pierre Omidyar, founder of eBay, wrote that “the monetization and manipulation of information is swiftly tearing us apart,” and commissioned a white paper on the effect that social media was having on accountability and trust and our democracy.

  “The system is failing,” Tim Berners-Lee declared. He was still an optimist, he said, “but an optimist standing at the top of the hill with a nasty storm blowing in my face, hanging on to a fence.”

  In an impassioned essay, Roger McNamee, an early investor in Facebook, argued that the Russians’ manipulation of Facebook, Twitter, Google, and other platforms to try to shift the outcomes of the 2016 U.S. election and the Brexit referendum was just the tip of a huge iceberg: unless fundamental changes were made, he warned, those platforms were going to be manipulated again, and “the level of political discourse, already in the gutter, was going to get even worse.”

  The problems were inherent, McNamee argued, in the algorithms used by platforms like Facebook to maximize user engagement. The more time members spend on a platform, the more ads a company sells and the more profits it makes, and the way to maximize engagement is by “sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.” This not only creates the filter bubbles that seal people off in partisan silos but also favors simplistic, provocative messages. Conspiracy theories easily go viral on social media. And so do dumbed-down, inflammatory political messages—like those retailed by the Trump campaign and the Vote Leave campaign in Britain, appealing to raw emotions like the fear of immigrants or anger over disappearing jobs. Such populist messages, historians attest, tend to gain traction during times of economic uncertainty (as in the lingering aftermath of the financial crisis of 2008 and snowballing income inequality) and cultural and social change (as with globalization and seismic technological innovation).

  Trump’s hate-fueled message was almost tailor-made for social media algorithms. Steve Bannon told the journalist Michael Lewis that Trump not only was an angry man but also had a unique ability to tap into the anger of others: “We got elected on Drain the Swamp, Lock Her Up, Build a Wall. This was pure anger. Anger and fear is what gets people to the polls.”

  At the same time, the Trump campaign made shrewd and Machiavellian use of social media and big-data tools, employing information from Facebook and Cambridge Analytica (a data science firm partially owned by the Trump backer and Breitbart investor Robert Mercer that boasts of its ability to psychologically profile millions of potential voters) to target its advertising and plan Trump’s campaign stops.

  Facebook revealed that the data of as many as 87 million people may have been shared improperly with Cambridge Analytica, which used the information to help create tools designed to predict and influence voter behavior. A former employee of Cambridge Analytica said that Steve Bannon oversaw a 2014 voter persuasion effort in which anti-establishment messages—like “drain the swamp” and “deep state”—were identified and tested.

  The Trump campaign’s digital director, Brad Parscale, recounted how they used Facebook’s advertising tools to micro-target potential supporters with customized ads, making some fifty to sixty thousand ads a day, continually tweaking language, graphics, even colors, to try to elicit a favorable response.

  The campaign also used so-called dark posts (visible only to the recipient) and launched three voter-suppression operations, according to a senior campaign official quoted in Bloomberg Businessweek: one was targeted at Bernie Sanders supporters; one at young women (who, the campaign thought, might be offended by reminders of Bill Clinton’s philandering—odd, given Trump’s own scandals with women); and one at African Americans (who the campaign thought might not vote for Clinton if reminded of her use of the term “super predators” in 1996, referring to her husband’s anticrime initiative).

  * * *


  The master manipulators of social media in the 2016 election, of course, were the Russians, whose long-term goal—to erode voters’ faith in democracy and the electoral system—dovetailed with their short-term goal of tipping the outcome toward Trump. U.S. intelligence agencies also concluded that Russian hackers stole emails from the Democratic National Committee, which were later provided to WikiLeaks. These plots were all part of a concerted effort by the Kremlin, stepped up since Putin’s reelection in 2012, to use asymmetrical, nonmilitary means to achieve its goals of weakening the European Union and NATO and undermining faith in globalism and Western democratic liberalism. Toward such ends, Russia has been supporting populist parties in Europe, like Marine Le Pen’s far-right National Front party in France, and has interfered in the elections of at least nineteen European countries in recent years. It also continues to wage disinformation campaigns through state media outlets like Sputnik and RT.

  In the case of the American election, Facebook told Congress that Russian operatives published some eighty thousand posts on Facebook between June 2015 and August 2017 that might have been seen by 126 million Americans; that’s more than half the number of people registered to vote in the country. Some of the Russian posts actively tried to promote Trump or damage Clinton; others were simply meant to widen existing divisions in American society over issues like race, immigration, and gun rights. For instance, there was a post from a phony group named South United, showing a Confederate flag and “a call for the South to rise again.” Another from a phony group called Blacktivist, memorializing the Black Panthers. And a Facebook ad called “Secured Borders,” showing a sign saying, “No Invaders Allowed.”

  “The strategy is to take a crack in our society and turn it into a chasm,” said Senator Angus King of Maine during a Senate Intelligence Committee hearing on Russian interference in the election.

  Reporting from several publications found that YouTube’s recommendation engine seemed to be steering viewers toward divisive, sensationalistic, and conspiracy-minded content. And Twitter found that more than fifty thousand Russia-linked accounts on its platform were posting material about the 2016 election. A report from Oxford University found that in the run-up to the election the number of links on Twitter to “Russian news stories, unverified or irrelevant links to WikiLeaks pages, or junk news” exceeded the number of links to professionally researched and published news. The report also found that “average levels of misinformation were higher in swing states”—like Florida, North Carolina, and Virginia—than in uncontested states.

  Russians had become very adept not only at generating fake news but also at inventing fake Americans who commented on that fake news and joined fake American groups. Vitaly Bespalov, who worked at the Internet Research Agency, a St. Petersburg troll factory, told NBC News that the job was “a merry-go-round of lies.” Workers on the first floor wrote fake news stories referencing blog posts written by workers on the third floor, while colleagues posted comments on those stories under fake names and coordinated other social media posts. According to U.S. intelligence sources, some of the IRA’s accounts had been producing pro-Russian propaganda about Ukraine but switched over to pro-Trump messages as early as December 2015.

  When the Access Hollywood tape of Trump talking about groping women came out before the election, Russian Twitter agents rushed to his rescue, trashing the mainstream media and trying to refocus attention on damaging emails hacked from Clinton’s campaign chairman, John Podesta. This sort of support for Trump continued after he took up residence in the White House, with pro-Kremlin Twitter accounts trying to stir up trouble over matters like the controversy over NFL players taking a knee. By the end of 2017, however, these Russian accounts seemed to be increasingly focused on undermining special counsel Robert Mueller and his investigation into Russian interference in the election.

  Russia also appears to have jumped into the U.S. debate over the Trump administration’s determination to repeal net neutrality—a move that was opposed by 83 percent of Americans in a poll taken shortly before the FCC voted to do away with the Obama-era rules that required internet providers to treat all web traffic equally. Before announcing its decision, the FCC had said it welcomed public comment on the issue, but it appears that many of the comments it received were fakes or duplicates. One study found that 444,938 comments came from Russian email addresses and that more than 7.75 million comments came from email domains associated with FakeMailGenerator.com and contained virtually identical wording.

  Troll factories and bot armies are used by political parties and governments of countries like Russia, Turkey, and Iran to spread propaganda, harass dissenters, flood social networks with misinformation, and create the illusion of popularity or momentum through likes, retweets, or shares. An Oxford University study noted, “Sometimes, when political parties or candidates use social media manipulation as part of their campaign strategy, these tactics are continued when they assume power. For example, in the Philippines, many of the so-called ‘keyboard trolls’ hired to spread propaganda for presidential candidate Duterte during the election continue to spread and amplify messages in support of his policies now that he’s in power.”

  * * *


  The use of bots in manipulating public opinion is just one of the factors examined in the Omidyar Group report on social media’s effect on public discourse. In addition to amplifying polarization, the report concluded, social media tends to undermine trust in institutions and makes it more difficult to have the sorts of fact-based debates and discussions that are essential to democracy. The micro-targeted ads on social media and the algorithms designed to customize people’s news feeds blur the distinctions between what is popular and what is verifiable, and diminish the ability of people to take part in a shared conversation.

  Things are only likely to get worse, particularly if the Trump White House remains in denial about Russian interference in the election and fails to take action against what Michael Hayden, a former director of the NSA and the CIA, has called the “most successful covert-influence operation in history.” The head of the Cyber Division at the Department of Homeland Security revealed that the Russians attempted to break into the election systems in twenty-one states during the 2016 election and successfully penetrated a few. And a computer security firm reported that the same Russian hackers who stole DNC emails in 2016 were targeting Senate accounts in the run-up to the 2018 midterms.

  Russia has already tried to meddle in elections in Germany, France, and the Netherlands, as well as in the Brexit referendum in the U.K., and the ease with which it interfered in the 2016 U.S. election (and the lack of penalties it suffered in year one of the Trump administration) has surely emboldened it. Politicians in Mexico and other countries now fear they might be next on Putin’s hit list and are bracing for destabilizing waves of fake news and propaganda.

  Technological developments are likely to complicate matters further. Advances in virtual reality and machine-learning systems will soon result in fabricated images and videos so convincing that they may be difficult to distinguish from the real thing. Voices can already be re-created from audio samples, and facial expressions can be manipulated by AI programs. In the future, we could be exposed to realistic videos of politicians saying things they never said: Baudrillard’s simulacrum come to life. These are Black Mirror–like developments that will complicate our ability to distinguish between the imitation and the real, the fake and the true.

  8

  “THE FIREHOSE OF FALSEHOOD”

  PROPAGANDA AND FAKE NEWS

  You can sway a thousand men by appealing to their prejudices quicker than you can convince one man by logic.

  —ROBERT A. HEINLEIN

  Russia is at the center of political conversations in America and Europe because of its interference in the 2016 U.S. presidential election and a host of other elections around the world. The methods used by Russia in these operations are reminders of the sophisticated propaganda machine that the Kremlin has built over the decades, going back to the Cold War, and its new mastery of cyber warfare, including hacking, fake news, and the weaponized use of social media. At the same time, not so coincidentally, the thinking of two Russian figures—Vladimir Lenin and the much lesser known Vladislav Surkov, a former postmodernist theater director who’s been described as “Putin’s Rasputin” and the Kremlin’s propaganda puppet master—informs many of the troubling political and social dynamics at work in the post-truth era.

  Almost a century after his death, Lenin’s model of revolution has proven frighteningly durable. His goal—not to improve the state machine, but to smash it and all its institutions—has been embraced by many twenty-first-century populists. And so have many of his tactics, from his use of confusion and chaos as tools to rally the masses, to his simplistic (and always broken) utopian promises, to his violent rhetoric attacking anything that could possibly be tarred as part of the status quo.

  His incendiary language, Lenin once explained, was “calculated to evoke hatred, aversion and contempt”; such wording was “calculated not to convince, but to break up the ranks of the opponent, not to correct the mistake of the opponent, but to destroy him, to wipe his organization off the face of the earth. This wording is indeed of such a nature as to evoke the worst thoughts, the worst suspicions about the opponent.” All of which sounds a lot like a template for the sort of language employed by Trump and his supporters in attacking Hillary Clinton during the 2016 campaign (“Lock her up!”), the sort of language employed by radical supporters of the Vote Leave campaign in Britain, the sort of language increasingly employed by right-wing populist movements on both sides of the Atlantic.
