Mindfuck


by Christopher Wylie


  Bannon’s first request of our team was to study who felt oppressed by political correctness. Cambridge Analytica found that, because people often overestimate how much others notice them, spotlighting socially uncomfortable situations, such as getting in trouble for mispronouncing a foreign-sounding name, was an effective prime for eliciting bias in target cohorts. One of the most effective messages the firm tested was getting subjects to “imagine an America where you can’t pronounce anyone’s name.” Subjects would be shown a series of uncommon names and then asked, “How hard is it to pronounce this name? Can you recall a time where people were laughing at someone who messed up an ethnic name? Do some people use political correctness to make others feel dumb or to get ahead?”

  People reacted strongly to the notion that “liberals” were seeking new ways to mock and shame them, along with the idea that political correctness was a method of persecution. An effective Cambridge Analytica technique was to show subjects blogs that made fun of white people like them, such as People of Walmart. Bannon had been observing online communities on places like 4chan and Reddit for years, and he knew how often subgroups of angry young white men would share content of “liberal elites” mocking “regular” Americans. There had always been publications that parodied the “hicks” of flyover country, but social media represented an extraordinary opportunity to rub “regular” Americans’ noses in the snobbery of coastal elites.

  Cambridge Analytica began to use this content to touch on an implied belief about racial competition for attention and resources—that race relations were a zero-sum game. The more they take, the less you have, and they use political correctness so you cannot speak out. This framing of political correctness as an identity threat catalyzed a “boomerang” effect in people where counternarratives would actually strengthen, not weaken, the prior bias or belief. This means that when targets would see clips containing criticism of racist statements by candidates or celebrities, this exposure would have the effect of further entrenching the target’s racialized views, rather than causing them to question those beliefs. In this way, if you could frame racialized views through the lens of identity prior to exposure to a counternarrative, that counternarrative would be interpreted as an attack on identity instead. What was so useful for Bannon was that it in effect inoculated target groups from counternarratives criticizing ethno-nationalism. It created a wicked reinforcement cycle in which the cohort would strengthen their racialized views when they were exposed to criticism. This may be in part because the area of the brain that is most highly activated when we process strongly held beliefs is the same area that is involved when we think about who we are and our identity. Later, when Donald Trump was aggressively criticized in the media for racist or misogynist statements, these critiques likely created a similar effect, where the criticism of Trump strengthened the resolve of supporters who would internalize the critique as a threat to their very identity.

  By making people angry in this way, CA was following a fairly wide corpus of research showing that anger interferes with information seeking. This is why people can “jump to conclusions” in a fit of rage, even if later they regret their decisions. In one experiment, CA would show people on online panels pictures of simple bar graphs about uncontroversial things (e.g., the usage rates of mobile phones or sales of a car type) and the majority would be able to read the graph correctly. However, unbeknownst to the respondents, the data behind these graphs had actually been derived from politically controversial topics, such as income inequality, climate change, or deaths from gun violence. When the labels of the same graphs were later switched to their actual controversial topic, respondents who were made angry by identity threats were more likely to misread the relabeled graphs that they had previously understood.

  What CA observed was that when respondents were angry, their need for complete and rational explanations was also significantly reduced. In particular, anger put people in a frame of mind in which they were more indiscriminately punitive, particularly to out-groups. They would also underestimate the risk of negative outcomes. This led CA to discover that even if a hypothetical trade war with China or Mexico meant the loss of American jobs and profits, people primed with anger would tolerate that domestic economic damage if it meant they could use a trade war to punish immigrant groups and urban liberals.

  Bannon was convinced that if you showed people what political correctness “really meant,” they would wake up to the truth. So Cambridge Analytica started asking subjects if the thought of their daughter marrying a Mexican immigrant made them feel uncomfortable. For subjects who denied discomfort with the idea, a prompt would then follow: “Did you feel like you had to say that?” Subjects would be given permission to change their answers, and many did. After the Facebook data was collected, CA began exploring ways of taking this further by pulling photos of daughters of white men in order to pair them with photos of black men—to show white men what political correctness “really looked like.”

  Cambridge Analytica’s research panels also identified that there were relationships between target attitudes and a psychological effect called the just-world hypothesis (JWH). This is a cognitive bias where some people rely on a presumption of a fair world: The world is a fair place where bad things “happen for a reason” or will be offset by some sort of “moral balancing” in the universe. We found that people who displayed the JWH bias were, for example, more prone to victim-blaming in hypothetical scenarios of sexual assault. If the world is fair, then random bad things should not happen to innocent people, and therefore there must have been a fault in the victim’s behavior. Finding ways to blame victims is psychologically prophylactic for some people because it helps them cope with anxiety induced by uncontrollable environmental threats while maintaining a comforting view that the world will still be fair to them.

  Cambridge Analytica found that JWH was related to many attitudes, but that it had a special relationship with racial bias. People who displayed JWH were more likely to agree with the idea that minorities were to blame for socioeconomic disparities between races. In other words, blacks have had all this time to achieve for themselves, but they have nothing to show for it. Maybe it wasn’t racist to suggest that minorities were not able to create their own success, subjects were told—maybe it was just realistic.

  CA then discovered that for those with evangelical worldviews in particular, a “just world” exists because God rewards people with success if they follow his rules. In other words, people who live good lives won’t get preexisting conditions, and they will succeed in life, even if they are black. Cambridge Analytica began feeding these cohorts narratives with an expanded religious valence. “God is fair and just, right? Wealthy people are blessed by God for a reason, right? Because He is fair. If minorities complain about receiving less, perhaps there is a reason—because He is fair. Or are you daring to question God?”

  This gave CA a way to cultivate more punitive views toward “the other.” If the world is fair and governed by a just God, then refugees are suffering for a reason. Over time, subjects would increasingly discount examples of valid refugee claims under U.S. law and instead focus on how and why the claimants should be punished. And in some cases, the stronger the refugee claim, the harsher the responses. The targets were less and less concerned with hypothetical refugees and more concerned with maintaining the consistency of their worldview. If you are strongly invested in the idea that the world is just, evidence to the contrary can feel deeply threatening.

  For Bannon’s free thinkers, race reality was not only becoming their reality, it was becoming God’s reality—a connection with a long history in America. From the time slaves were first brought to America, preachers drew from the book of Ephesians to justify the practice, quoting the line “Servants, be obedient to them that are your masters” as evidence that slave ownership was godly. In the early nineteenth century, Episcopal bishop Stephen Elliott suggested that those who wished to end slavery were behaving in an ungodly way. They should, he wrote, “consider whether, by their interference with this institution, they may not be checking and impeding a work which is manifestly providential,” as millions of “semi-barbarous people” had “learned the way to Heaven and…have been made to know their Savior through the means of African slavery!” In the post–Civil War South, states enacted “black codes” that curtailed black citizens’ newfound freedom. In cities such as Memphis and New Orleans, white politicians and city officials used fearmongering to provoke bloody riots that took dozens of black lives. Jim Crow laws, enacted in the late nineteenth and early twentieth centuries, ensured that for decades to come, the races would remain segregated in public spaces. Poll taxes rendered many blacks in the South all but unable to vote. And the Ku Klux Klan, which had virtually disappeared just after the Civil War, enjoyed a resurgence in the early twentieth century, in part by presenting itself as a national patriotic organization.

  The Civil Rights Act of 1964 and the Voting Rights Act of 1965 represented a huge leap forward for the rights of American blacks. These sweeping sets of laws promised to right many of the wrongs that had been perpetrated against the black community for so many years by ensuring voting rights, mandating desegregation of public facilities, and instituting equal employment opportunity and nondiscrimination in federal programs. They also opened a new chapter in the politics of shamelessly stoking white fear.

  In the late 1960s, Richard Nixon’s “southern strategy” fueled racial fear and tensions in order to shift white voters’ allegiance from the Democrats to the GOP. Nixon ran his 1968 presidential campaign on the twin pillars of “states’ rights” and “law and order”—both of which were obvious, racially coded dog whistles. In his 1980 campaign, Ronald Reagan repeatedly invoked the “welfare queen”—a black woman who supposedly was able to buy a Cadillac on government assistance. In 1988, George H. W. Bush’s campaign ran the infamous Willie Horton ad, terrifying white voters with visions of wild-haired black criminals running amok.

  Steve Bannon aimed to affirm the ugliest biases in the American psyche and convince those who possessed them that they were the victims, that they had been forced to suppress their true feelings for too long. Deep within America’s soul lurked an explosive tension. Bannon had long sensed this, and now he had the data to prove it. History, Bannon was convinced, would prove to be on his side, and the right tools would hasten his prophecy. Young people, with their lack of opportunities stemming from a corpulent state and a corrupt finance system, were primed to rebel. They just did not know it yet. Bannon wanted them to understand their role in his prophecy of revolution—that they would lead a generational “turning” of history and become the “artists” who would redraw a new society filled with meaning and purpose after its “great unraveling.” Major figures in history, he said, were artists: Franco and Hitler were painters, while Stalin, Mao, and bin Laden were all poets. He understood that movements adopt a new aesthetic for society. Bannon asked why dictators always lock up the poets and artists first. Because they are often artists themselves. And for Bannon, this movement was primed to become his great performance. It would fulfill his prophecy by making real the narratives of his favorite books, like The Fourth Turning, which predicts an impending crisis followed by a forgotten generation rising up in rebellion, or The Camp of the Saints, where Western civilization collapses from the weight of caravans of immigrant invaders.

  But Bannon needed an army to unleash chaos. For him, this was an insurgency, and to inspire total loyalty and total engagement, he was prepared to use any narrative that worked. The exploitation of cognitive biases, for Bannon, was simply a means of “de-programming” his targets from the “conditioning” they had endured growing up in a vapid and meaningless society. Bannon wanted his targets to “discover themselves” and “become who they really were.” But the tools created at Cambridge Analytica in 2014 were not about self-actualization; they were used to accentuate people’s innermost demons in order to build what Bannon called his “movement.” By targeting people with specific psychological vulnerabilities, the firm victimized them into joining what was nothing more than a cult led by false prophets, where reason and facts would have little effect on its new followers, digitally isolated as they now were from inconvenient narratives.

  In the last discussion I ever had with Bannon, he told me that to fundamentally change society, “you have to break everything.” And that’s what he wanted to do—to fracture “the establishment.” Bannon faulted “big government” and “big capitalism” for suppressing the randomness that is essential to human experience. He wanted to liberate the people from a controlling administrative state that made choices for them and thus removed purpose from their lives. He wanted to bring about chaos to end the tyranny of certainty within the administrative state. Steve Bannon did not want, and would not tolerate, the state dictating America’s destiny.

  CHAPTER 8

  FROM RUSSIA WITH LIKES


  Keeping true to its origins in foreign information operations, Cambridge Analytica’s London office saw new characters arriving almost daily. The firm became a revolving door of foreign politicians, fixers, security agencies, and businessmen with their scantily clad private secretaries in tow. It was obvious that many of these men were associates of Russian oligarchs who wanted to influence a foreign government, but their interest in foreign politics was rarely ideological. Rather, they were usually seeking help either to stash money somewhere discreet or to retrieve money that was sitting in a frozen account somewhere in the world. Staff were told to ignore the comings and goings of these men and not ask too many questions, but they would joke about it on internal chat logs, and the visiting Russians in particular were usually the more eccentric variety of clients we would encounter. When the firm would conduct internal research on these potential clients, we would hear through the grapevine about the amusing hobbies or bizarre sexual escapades these powerful men would get up to. And I did, admittedly, turn a blind eye to the firm’s meetings with suspicious-looking clients. I knew it would just get me in trouble with Nix if I asked too many questions that he didn’t care for. But at the time, in spring 2014, two years before any Russian disinformation efforts hit the U.S. presidential election, there wasn’t anything innately suspicious about these Russians beyond the typical bread-and-butter shadiness that the firm engaged in. That is, except for one prospective client that CA executives became both very giddy and unusually elusive about.

  In the spring of 2014, the large Russian oil company Lukoil contacted Cambridge Analytica and began asking questions. At first, Nix handled the conversations, but soon the oil executives wanted answers that he was incapable of providing. He sent Lukoil CEO Vagit Alekperov a white paper I’d written about Cambridge Analytica’s data targeting projects in the United States, after which Lukoil asked for a meeting. Nix said that I should come along. “They understand behavioral micro-targeting in the context of elections (as per your excellent document/white paper) but they are failing to make the connection between voters and their consumers,” he wrote in an email.

  Well, I was failing to make that connection, too. Lukoil was a major force in the global economy—the largest privately owned company in Putin’s kleptocracy—but I couldn’t see an obvious link between a Russian oil company and CA’s work in the United States. And Nix was of no help. “Oh, you know how these things are,” he told me. “You just lift your skirt a little, and then they give you money.” In other words, he wasn’t interested in the details. If Lukoil wanted to pay for our data, why should we care what they did with it?

  Shortly after the first Lukoil approach, a 2014 memo on CA’s internal capacity was drafted and sent to Nix. The briefing discussed in euphemistic terms what the firm, at least in theory, was capable of setting up were there to be a project that needed special intelligence services or scaled disinformation operations on social media. (As the memo was internal, it referenced SCL; Cambridge Analytica was merely a front-facing brand for American clients that was entirely staffed by SCL personnel.) “SCL retains a number of retired intelligence and security agency officers from Israel, USA, UK, Spain & Russia each with extensive technical and analytical experience,” the memo read. “Our experience shows that in many cases utilizing social media or ‘foreign’ publications to ‘expose’ an opponent is often more effectual than using potentially biased local media channels.” The memo discussed “infiltrating” opposition campaigns using “intelligence nets” to obtain “damaging information” and creating scaled networks of “Facebook and Twitter accounts to build credibility and cultivate followers.” For many of SCL’s clients, this was a standard offer—private espionage, stings, bribes, extortion, infiltrations, honey traps, and disinformation spread through fake accounts on social media. For the right price, SCL was willing to do whatever it took to win an election. And now, armed with even more extensive data sets and AI capabilities, and millions invested, the newly formed Cambridge Analytica was looking to take this further.

  The Lukoil execs came to London, where Nix had prepared a pitch deck of slides for the meeting. I sat back in my chair, curious to discover what the hell he was actually pitching. The first couple of slides outlined an SCL project in Nigeria aimed at undermining voters’ confidence in civic institutions. Labeled “Election: Inoculation,” the material described how to spread rumors and disinformation to sway election results. Nix played videos of emotional voters convinced that the upcoming Nigerian election would be rigged.

  “We made them think that,” he said with delight.

  The next set of slides described how SCL had worked to fix elections in Nigeria, complete with videos of voters saying how worried they were about rumors of violence and upheaval. “And we made them think that too,” Nix said.

 
