Mindfuck
It was liberating to find my voice. Like any teenager, I was exploring who I was, but for someone gay and in a wheelchair, this was an even bigger challenge. When I started attending these public forums, I began to realize that many of the things I was living through were not simply personal issues—they were also political issues. My challenges were political. My life was political. My mere existence was political. And so I decided to become political. An adviser to one of the MPs, a former software engineer named Jeff Silvester, took notice of this outspoken kid who always showed up. He offered to help me find a role in the Liberal Party of Canada (LPC), which was looking for tech help. Soon it was agreed: At the end of that summer I would start my first real job, as a political assistant at Parliament in Ottawa.
I spent the summer of 2007 in Montréal, hanging out in hacker spaces frequented by French Canadian techno-anarchists. They tended to gather in converted industrial buildings with concrete floors and plywood walls, in rooms decorated with retro tech like Apple IIs and Commodore 64s. By then, with treatment, I could shuffle around without a wheelchair. (I have continued to improve, but my physical limits were tested by my experience as a whistleblower. Just before the first Cambridge Analytica story was published, I had a seizure and collapsed, unconscious, on a South London sidewalk before waking up at University College Hospital to the sharp pain of a nurse inserting an IV needle into my arm.) Most hackers couldn’t care less what you look like or if you walk funny. They share your love of the craft and want to help you get better at it.
My brief exposure to hacking communities left a permanent impression. You learn that no system is absolute. Nothing is impenetrable, and barriers are a dare. The hacker philosophy taught me that if you shift your perspective on any system—a computer, a network, even society—you may discover flaws and vulnerabilities. As a gay kid in a wheelchair, I came to understand systems of power early on in life. But as a hacker, I learned that every system has weaknesses waiting to be exploited.
* * *
SHORTLY AFTER I STARTED my job at the Canadian Parliament, the Liberal Party took an interest in what was happening down south. At that time, Facebook was just becoming mainstream, and Twitter had just started gaining momentum; no one had any concept of how to use social media to campaign, because social media was in its infancy. But a rising star in U.S. presidential politics was about to hit the accelerator.
While other candidates were twiddling their thumbs trying to figure out the Internet, Barack Obama’s team set up My.BarackObama.com and started a grassroots revolution. While other sites (like Hillary Clinton’s) focused on putting up standard political advertisements, Obama’s website centered on providing a platform for grassroots organizations to organize and execute get-out-the-vote campaigns. His website ratcheted up excitement around the Illinois senator, who was much younger and more tech savvy than his opponents. Obama felt like what a leader is supposed to be. And after spending my formative years being told about my limits, the defiant optimism he instilled in that simple message of Yes, we can! spoke to me. Obama and his team were transforming politics, so when I was eighteen, I was among several people sent to the United States by the Liberal Party to observe different facets of his campaign and identify new tactics that could be transported back for progressive campaigns in Canada.
At first, I toured a couple of early primary states, starting with New Hampshire, where I spent time talking to voters and seeing up close what American culture was really like. This was both fun and eye-opening; coming from Canada, I was struck by how different our sensibilities were. The first time an American told me he was dead set against “socialized medicine,” the same kind of public healthcare I accessed almost every month back home, I was shocked that someone could even think this way. The hundredth time, not so much.
I liked roaming around and talking with people, so when it was time to switch focus to the data group, I wasn’t terribly excited to do it. But then I was introduced to Obama’s national director of targeting, Ken Strasma, who quickly changed my mind.
The sexy part of the Obama campaign was its branding and use of new media like YouTube. This was the cool stuff, the visual strategy nobody had used before because YouTube was still so new. That was what I wanted to see, until Ken stopped me short. Forget the videos, he told me. I needed to go deeper, into the heart of the campaign’s tech strategy. Everything we do, he said, is predicated on understanding exactly who we need to talk to, and on which issues.
In other words, the backbone of the Obama campaign was data. And the most important work Strasma’s team produced was the modeling they used to analyze and understand that data, which allowed them to translate it into action—to determine a real-world communications strategy through…artificial intelligence. Wait—AI for campaigns? It seemed wildly futuristic, as if they were building a robot that could devour reams of information about voters, then spit out targeting criteria. That information then traveled all the way up to the senior levels of the campaign, where it was used to determine key messages and branding for Obama.
The infrastructure for processing all this data came from a company then called the Voter Activation Network, Inc. (VAN), which was run by a fabulous gay couple from the Boston area, Mark Sullivan and Jim St. George. By the end of the 2008 campaign, thanks to VAN, the Democratic National Committee would have ten times more data on voters than it had after the 2004 campaign. This volume of data, and the tools to organize and manipulate it, gave Democrats a clear advantage in driving voters to the polls.
The more I learned about the Obama machine, the more fascinated I was. And I later got to ask all the questions I wanted of Mark and Jim, as they seemed to find it amusing that this young Canadian had come to America to learn about data and politics. Before I saw what Ken, Mark, and Jim were doing, I hadn’t thought about using math and AI to power a political campaign. In fact, when I first saw people lined up at computers at the Obama headquarters, I thought, Messages and emotions, not computers and numbers, are what create a winning campaign. But I learned that it was those numbers—and the predictive algorithms they created—that separated Obama from anyone who had ever run for president before.
As soon as I realized how effectively the Obama campaign was using algorithms to deliver its messages, I started studying how to create them on my own. I taught myself how to use basic software packages like MATLAB and SPSS, which let me mess around with data. Instead of relying on a textbook, I started by playing with the Iris data set—the classic data set for learning statistics—and learned by trial and error. Being able to manipulate the data, which involved using the different features of irises, like petal length and width, to predict species of flowers, was absolutely absorbing.
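The exercise described above can be sketched in a few lines. What follows is a toy recreation in Python (rather than the MATLAB or SPSS of the era), using a handful of typical petal measurements from the classic Iris data set and a simple nearest-centroid rule; the sample values and the choice of classifier are illustrative, not a record of the actual analysis.

```python
# Toy sketch: predict iris species from (petal length, petal width),
# the same kind of trial-and-error modeling described in the text.
import math

# (petal length cm, petal width cm) samples, typical of the Iris data set.
samples = {
    "setosa":     [(1.4, 0.2), (1.3, 0.2), (1.5, 0.3)],
    "versicolor": [(4.5, 1.5), (4.1, 1.3), (4.7, 1.4)],
    "virginica":  [(6.0, 2.5), (5.5, 1.8), (5.9, 2.1)],
}

def centroid(points):
    # Average measurements for one species.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {species: centroid(pts) for species, pts in samples.items()}

def predict(petal_length, petal_width):
    # Assign the species whose average measurements are closest (Euclidean).
    return min(centroids,
               key=lambda s: math.dist((petal_length, petal_width), centroids[s]))

print(predict(1.4, 0.2))  # setosa — a short-petaled flower
print(predict(5.8, 2.2))  # virginica — a long-petaled flower
```

Swapping the petal measurements for demographic fields is all it takes to turn the same idea toward voters, which is exactly the leap described next.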
Once I understood the basics, I switched from petals to people. VAN was filled with information on age, gender, income, race, homeownership—even magazine subscriptions and airline miles. With the right data inputs, you could start to predict whether people would vote for Democrats or Republicans. You could identify and isolate the issues that were likely to be most important to them. You could begin to craft messages that would have a greater chance of swaying their opinions.
For me, this was a wholly new way of understanding elections. Data was a force for good, powering this campaign of change. It was being used to produce first-time voters, to reach people who felt left out. The deeper I got into it, the more I thought that data would be the savior of politics. I couldn’t wait to get back to Canada and share with the Liberal Party what I’d learned from the next president of the United States.
In November, Obama achieved a decisive victory over John McCain. Two months later, after friends in the campaign extended an invitation to the inauguration, I flew to Washington to party with the Democratic victors. (First came a slight kerfuffle at the door, when staff freaked out about letting the under-twenty-one me into the open-bar event.) I had an incredible evening, chatting with Jennifer Lopez and Marc Anthony, watching Barack and Michelle Obama enjoy their first dance as the First Couple. A new era had dawned, and now came a chance to celebrate what could happen when the right people understood how to use data to win modern elections.
* * *
BUT BY DIRECTLY COMMUNICATING select messages to select voters, the microtargeting of the Obama campaign had started a journey toward the privatization of public discourse in America. Although direct mail had long been part of American campaigns, data-driven microtargeting allowed campaigns to match a myriad of granular narratives to granular universes of voters—your neighbor might receive a wholly different message than you did, with neither of you any the wiser. When campaigns were conducted in private, the scrutiny of debate and publicity could be avoided. The town square, the very foundation of American democracy, was incrementally being replaced by online ad networks. And without any scrutiny, campaign messages no longer even had to look like campaign messages. Social media created a new environment where a campaign’s outreach could appear, as Obama’s campaign first piloted, as if a friend were sending you a message, without your realizing the source or calculated intent of that contact. A campaign could look like a news site, a university, or a public agency. With the ascendancy of social media, we have been forced to trust political campaigns to be honest, because if lies are told, we may never notice. There is no one to correct the record inside a private ad network.
In the years leading up to the first Obama campaign, a new logic of accumulation emerged in the boardrooms of Silicon Valley: Tech companies began making money from their ability to map out and organize information. At the core of this model was an essential asymmetry in knowledge—the machines knew a lot about our behavior, but we knew very little about theirs. In a trade-off for convenience, these companies offered people information services in exchange for more information—data. The data has become more and more valuable, with Facebook making on average $30 from each of its 170 million American users. At the same time, we have fallen for the idea that these services are “free.” In reality, we pay with our data into a business model of extracting human attention.
More data led to more profits, and so design patterns were implemented to encourage users to share more and more about themselves. Platforms started to mimic casinos, with innovations like the infinite scroll and addictive features aimed at the brain’s reward systems. Services such as Gmail began trawling through our correspondence in a way that would land a traditional postal courier in jail. Live geo-tracking, once reserved for convicts’ ankle bracelets, was added to our phones, and what would have been called wiretapping in years past became a standard feature of countless applications.
Soon we were sharing personal information without the slightest hesitation. This was encouraged, in part, by a new vocabulary. What were in effect privately owned surveillance networks became “communities,” the people these networks used for profit were “users,” and addictive design was promoted as “user experience” or “engagement.” People’s identities began to be profiled from their “data exhaust” or “digital breadcrumbs.” For thousands of years, dominant economic models had focused on the extraction of natural resources and the conversion of these raw materials into commodities. Cotton was spun into fabric. Iron ore was smelted into steel. Forests were cut into timber. But with the advent of the Internet, it became possible to create commodities out of our lives—our behavior, our attention, our identity. People were processed into data. We would serve as the raw material of this new data-industrial complex.
One of the first people to spot the political potential of this new reality was Steve Bannon, the relatively unknown editor of right-wing website Breitbart News, which was founded to reframe American culture according to the nationalist vision of Andrew Breitbart. Bannon saw his mission as nothing short of cultural warfare, but when I first encountered him, Bannon knew that something was missing, that he didn’t have the right weapons. Whereas field generals focused on artillery power and air dominance, Bannon needed to gain cultural power and informational dominance—a data-powered arsenal suited to conquer hearts and minds in this new battlespace. The newly formed Cambridge Analytica became that arsenal. Refining techniques from military psychological operations (PSYOPS), Cambridge Analytica propelled Steve Bannon’s alt-right insurgency into its ascendancy. In this new war, the American voter became a target of confusion, manipulation, and deception. Truth was replaced by alternative narratives and virtual realities.
Cambridge Analytica first piloted this new warfare in Africa and on tropical islands around the world. The firm experimented with scaled online disinformation, fake news, and mass profiling. It worked with Russian agents and employed hackers to break into opposition candidates’ email accounts. Soon enough, having perfected its methods far from the attention of Western media, CA shifted from instigating tribal conflict in Africa to instigating tribal conflict in America. Seemingly out of nowhere, an uprising erupted in America with manic cries of MAGA! and Build the wall! Presidential debates suddenly shifted from policy positions into bizarre arguments about what was real news and what was fake news. America is now living in the aftermath of the first scaled deployment of a psychological weapon of mass destruction.
As one of the creators of Cambridge Analytica, I share responsibility for what happened, and I know that I have a profound obligation to right the wrongs of my past. Like so many people in technology, I stupidly fell for the hubristic allure of Facebook’s call to “move fast and break things.” I’ve never regretted something so much. I moved fast, I built things of immense power, and I never fully appreciated what I was breaking until it was too late.
* * *
AS I MADE MY WAY to the secure facility deep under the Capitol that day in the early summer of 2018, I felt numbed to what was happening around me. Republicans were already conducting opposition research on me. Facebook was using PR firms to smear its critics, and its lawyers had threatened to report me to the FBI for an unspecified cybercrime. The DOJ was now under the control of a Trump administration that was publicly ignoring long-held legal conventions. I had enraged so many interests that my lawyers were genuinely concerned the FBI might arrest me after I was finished. One of my lawyers told me the safest thing to do was stay in Europe.
I cannot, for security and legal reasons, quote directly from my testimony in Washington. But I can tell you that I walked into that room with two large binders, each containing several hundred pages of documents. The first binder contained emails, memos, and documents showing the extent of Cambridge Analytica’s data-harvesting operation. This material demonstrated that the company had recruited hackers, hired personnel with known links to Russian intelligence, and engaged in bribery, extortion, and disinformation campaigns in elections around the world. There were confidential legal memos from lawyers warning Steve Bannon about Cambridge Analytica’s violations of the Foreign Agents Registration Act, as well as a cache of documents describing how the firm exploited Facebook to access more than eighty-seven million private accounts and used that data in efforts to suppress the votes of African Americans.
The second binder was more sensitive. It contained hundreds of pages of emails, financial documents, and transcripts of audio recordings and text messages that I had covertly procured in London earlier that year. These files had been sought by U.S. intelligence and detailed the close relationships between the Russian embassy in London and both Trump associates and leading Brexit campaigners. This file showed that leading British alt-right figures met with the Russian embassy before and after they flew to meet the Trump campaign, and that at least three of them were receiving offers of preferential investment opportunities in Russian mining companies potentially worth millions. What became clear in these communications was how early the Russian government had identified the Anglo-American alt-right network, and that it may have groomed figures within it to become access agents to Donald Trump. It showed the connections among the major events of 2016: the rise of the alt-right, the surprise passage of Brexit, and the election of Trump.
Four hours went by. Five. I was deep into describing Facebook’s role in—and culpability for—what had happened.
Did the data used by Cambridge Analytica ever get into the hands of potential Russian agents? Yes.
Do you believe there was a nexus of Russian state-sponsored activity in London during the 2016 presidential election and Brexit campaigns? Yes.
Was there communication between Cambridge Analytica and WikiLeaks? Yes.
I finally saw glimmers of understanding coming into the committee members’ eyes. Facebook is no longer just a company, I told them. It’s a doorway into the minds of the American people, and Mark Zuckerberg left that door wide open for Cambridge Analytica, the Russians, and who knows how many others. Facebook is a monopoly, but its behavior is more than a regulatory issue—it’s a threat to national security. The concentration of power that Facebook enjoys is a danger to American democracy.