Mindfuck
What was supposed to be so brilliant about the Internet was that people would suddenly be able to erode all those barriers and talk to anyone, anywhere. But what actually happened was an amplification of the same trends that took hold of a country’s physical spaces. People spend hours on social media, following people like them, reading news articles “curated” for them by algorithms whose only morality is click-through rates—articles that do nothing but reinforce a unidimensional point of view and take users to extremes to keep them clicking. What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos. We are seeing the segregation of our realities. If Facebook is a “community,” it is a gated one.
Shared experience is the fundamental basis for solidarity among citizens in a modern pluralistic democracy, and the story of the civil rights movement is, in part, the story of being able to share space together: being in the same part of the movie theater or using the same water fountain or bathroom. Segregation in America has always manifested itself in insidiously mundane ways—through separate bus seats, water fountains, schools, theater tickets, and park benches. And perhaps now on social media. For Rosa Parks, being ordered to give up her bus seat was just one of the countless ways white America systematically ensured that her dark skin was separated and unseen—that she remained the other, not part of their America. And although we no longer allow buildings to segregate their entrances based on a guest’s race, segregation rests at the heart of the architectures of the Internet.
From social isolation comes the raw material of both conspiracism and populism: mistrust. Cambridge Analytica was the inevitable product of this balkanized cyberspace. The company was able to get its targets addicted to rage only because there was nothing to prevent it from doing so—and so, unimpeded, the company drowned them in a maelstrom of disinformation, with predictably disastrous results. But simply stopping CA is not enough. America’s newfound crisis of perception will only continue to worsen until we address the underlying architectures that got us here. And the consequences of inaction would be dire. The destruction of mutual experience is the essential first step to othering, to denying another perspective on what it means to be one of us.
Steve Bannon recognized that the “virtual” worlds of the Internet are so much more real than most people realize. Americans check their phones on average fifty-two times per day. Many now sleep with their phones charging beside them—they sleep with their phones more than they sleep with people. The first and last thing they see in their waking hours is a screen. And what people see on that screen can motivate them to commit acts of hatred and, in some cases, acts of extreme violence. There is no such thing as “just online” anymore, and online information—or disinformation—that engages its targets can lead to horrific tragedies. In response, Facebook, like the NRA, evades its moral responsibility by invoking the same kind of “Guns don’t kill people” argument. They throw up their hands and claim they can’t control how their users abuse their products, even when mass murder results. If ethnic cleansing is not enough for them to act, what is? When Facebook goes on yet another apology tour, loudly professing that “we will try harder,” its empty rhetoric is nothing more than the thoughts and prayers of a technology company content to profit from a status quo of inaction. For Facebook, the lives of victims have become an externality of their continued quest to move fast and break things.
When I came out as a whistleblower, the alt-right’s digital rage machine turned its sights to me. In London, enraged Brexiteers pushed me into oncoming traffic. I was followed around by alt-right stalkers and had photos of me at clubs with my friends published on alt-right websites with information about where to find me. When it came time to testify at the European Parliament, conspiracies about Facebook’s critics were beginning to percolate through forums of the alt-right. As I testified, there were chants of “Soros, Soros, Soros” in the back. As I was leaving the European Parliament, a man came up to me on the street, shouting “Jew money!” At the time, these narratives seemed to come out of nowhere. Later, it emerged that Facebook, in a panic about its PR crisis, had hired the secret communications firm Definers Public Affairs, which subsequently leaked out fake narratives filled with anti-Semitic tropes about its critics being part of a George Soros–funded conspiracy. Rumors were seeded on the Internet and, as I discovered personally, its targets took it as a cue to take matters into their own hands.
* * *
IN FEBRUARY 2013, a Russian military general named Valery Gerasimov wrote an article challenging the prevailing notions of warfare. Gerasimov, who was Russia’s chief of the general staff (roughly equivalent to chairman of the U.S. Joint Chiefs of Staff), penned his thoughts in the Military-Industrial Kurier under the title “The Value of Science Is in the Foresight”—a set of ideas that some would later dub the Gerasimov Doctrine. Gerasimov wrote that the “ ‘rules of war’ have changed” and that “the role of nonmilitary means of achieving political and strategic goals has grown.” He addressed the uses of artificial intelligence and information in warfare: “The information space,” he wrote, “opens wide asymmetrical possibilities for reducing the fighting potential of the enemy.” Essentially, Gerasimov took the lessons of the Arab Spring uprisings, which were propelled by information sharing on social media, and urged military strategists to adapt them. “It would be easiest of all to say that the events of the ‘Arab Spring’ are not war, and so there are no lessons for us—military men—to learn. But maybe the opposite is true—that precisely these events are typical of warfare in the twenty-first century.”
Gerasimov’s article was followed by another Russian military strategy paper, this one written by Colonel S. G. Chekinov and Lieutenant General S. A. Bogdanov. Their paper took Gerasimov’s idea even further: The authors wrote that it would be possible to attack an adversary by “obtain[ing] information to engage in propaganda from servers of the Facebook and [T]witter public networks” and that, with these “powerful information technologies at its disposal, the aggressor will make an effort to involve all public institutions in the country it intends to attack, primarily the mass media and religious organizations, cultural institutions, non-governmental organizations, public movements financed from abroad, and scholars engaged in research on foreign grants.” At the time, it was a radical new idea. Read today, it is a precise blueprint for Russia’s interference in the 2016 election.
The history of warfare is the history of new inventions and strategies, many of which were born out of necessity. By most metrics, Russia’s military is significantly weaker than that of the United States. The U.S. military budget, at $716 billion, is more than ten times that of Russia. The United States has 1.28 million active military personnel, as compared with Russia’s 1 million; has more than 13,000 total aircraft, as compared with Russia’s 4,000; and has twenty aircraft carriers, whereas Russia has one. By all existing conventional measures, Moscow would never again be competitive with the United States in terms of “great powers” warfare, and Vladimir Putin knew it. So the Russians had to devise another way to regain the advantage—one that had nothing to do with the physical battlespace.
It’s difficult for military strategists to envision new forms of battle when they’re focused on those at hand. Before the advent of flight, military commanders cared only about how to wage combat on land or at sea. It wasn’t until 1915, when the French pilot Roland Garros flew a plane jerry-rigged with a machine gun, that military strategists realized that war could actually be waged from the skies. Then, once aircraft began engaging in attacks, army units on the ground pivoted as well, creating compact, rapid-fire antiaircraft guns. And so the evolution of war continued.
Information warfare has evolved in similar fashion. At first, no one could have imagined that Facebook or Twitter could be battlefield tools; warfare was waged on the ground, in the air, at sea, and potentially in space. But the fifth domain—cyberspace—has proved to be a fruitful battleground for those who had the imagination and foresight to envision using social media for information warfare. You can draw a straight line from the groundwork laid by Gerasimov, Chekinov, and Bogdanov, right through the actions of Cambridge Analytica, to the victories of the Brexit and Trump campaigns. In only five or so years, the Russian military and state have managed to develop the first devastatingly effective new weapon of the twenty-first century.
They knew it would work, because companies such as Facebook would never take the “un-American” step of reining in their users. So Russia didn’t have to disseminate propaganda. They could just get the Americans to do it themselves, by clicking, liking, and sharing. Americans on Facebook did the Russians’ work for them, laundering their propaganda through the First Amendment.
But this new era of scaled disinformation is not confined to the realm of politics. Companies like Starbucks, Nike, and other fashion brands have found themselves targets of Russian-sponsored disinformation operations. When brands make statements that wade into existing social or racial tensions, there have been several identified instances in which Russian-sponsored fake news sites, botnets, and social media operations have activated to weaponize these narratives and provoke social conflict. In August 2016, the football player Colin Kaepernick refused to stand for the American national anthem to protest systemic racism and police brutality toward African Americans and other minorities in the United States. The fashion brand Nike, Kaepernick’s sponsor, stood behind the athlete, and a controversy ensued about Nike’s response. But unknown to many at the time, Russian-linked social media accounts began to spread and amplify existing hashtags promoting a Nike boycott within hours of the scandal emerging. Some of this Russian-amplified content eventually made it into mainstream news, which helped legitimize the Nike boycott narrative as a purely homegrown protest. Cybersecurity firms also identified fake Nike coupons originating from alt-right groups that targeted African American social media users with offers like “75% off all shoes for people of color.” The coupons were intended to create scenarios in which unwitting African American customers would try to use the coupons in a Nike store, where they would be refused. In the age of viral videos, this scenario could in turn create “real” footage showcasing a racist trope of an “angry black man” demanding free stuff in a store. So why would these disinformation operations target a fashion company and attempt to weaponize its brand? Because the objective of this hostile propaganda is not simply to interfere with our politics, or even to damage our companies. The objective is to tear apart our social fabric. 
They want us to hate one another. And that division can hit so much harder when these narratives contaminate the things we care about in our everyday lives—the clothes we wear, the sports we watch, the music we listen to, or even the coffee we drink.
We are all vulnerable to manipulation. We make judgments based on the information available to us, and we become susceptible to being misled when our access to that information is mediated. Over time, our biases can become amplified without our even realizing it. Many of us forget that what we see in our newsfeeds and our search engines is already moderated by algorithms whose sole motivation is to select what will engage us, not inform us. With most reputable news sources now behind paywalls, we are already seeing information inch toward becoming a luxury product in a marketplace where fake news is always free.
In the last economic revolution, industrial capitalism sought to exploit the natural world around us. It is only with the advent of climate change that we are now coming to terms with its ecological externalities. But in this next iteration of capitalism, the raw materials are no longer oil or minerals but rather commodified attention and behavior. In this new economy of surveillance capitalism, we are the raw materials. What this means is that there is a new economic incentive to create substantial informational asymmetries between platforms and users. In order to be able to convert user behavior into profit, platforms need to know everything about their users’ behavior, while their users know nothing of the platform’s behavior. As Cambridge Analytica discovered, this becomes the perfect environment to incubate propaganda.
With the advent of home automation hubs such as Amazon Alexa and Google Home, we are seeing the first step toward the eventual integration of cyberspace with our temporal physical reality. Fifth-generation (5G) mobile and next-generation Wi-Fi are already being rolled out, laying the foundations for the “Internet of Things” (IoT) to become the new norm, where household appliances big and small will become connected to high-speed and ubiquitous Internet networks. These mundane devices, whether they are a refrigerator, a toothbrush, or a mirror, are envisaged to use sensors to begin tracking users’ behavior inside their own homes, relaying the data back to service providers. Amazon, Google, and Facebook have already applied for patents to create “networked homes” that integrate in-home IoT sensors with online marketplaces, ad networks, and social profiles. In this future, Amazon will know when you pop an aspirin, and Facebook will watch your kids play in the living room.
Fully integrated with intelligent information networks, this new environment will be able to watch us, think about us, judge us, and seek to influence us by mediating our access to information—where “it” can see us, but we cannot see “it.” For the first time in human history, we will immerse ourselves in motivated spaces influenced by these silicon spirits of our making. No longer will our environment be passive or benign; it will have intentions, opinions, and agendas. No longer will our homes be a sanctuary from the outside world, for an ambient presence will persist throughout each connected room. We are creating a future where our homes will think about us. Where our cars and offices will judge us. Where doors become the doormen. Where we have created the demons and angels of the future.
This is the dream that Silicon Valley has for us all—to surround us at every minute and everywhere. In Cambridge Analytica’s quest for informational dominance, it was never going to be satisfied with just social data sets and had already begun to build relationships with satellite and digital TV providers. After tapping into connected televisions, Cambridge Analytica planned to find a way to integrate with sensors and smart devices in people’s homes. Imagine a future where a company like Cambridge Analytica could edit your television, talk to your children, and whisper to you in your sleep.
* * *
THE FOUNDATION OF OUR legal system is contingent upon the notion that our environment is passive and inanimate. The world surrounding us may passively influence our decisions, but such influence is not motivated. Nature or the heavens do not choose to influence us. Over centuries, the law has developed several fundamental presumptions about human nature. The most important of these is the notion of human agency as an irrefutable presumption in the law—that humans have the capacity to make rational and independent choices on their own accord. It follows that the world does not make decisions for humans, but that humans make decisions inside of that world.
This notion of human agency serves as the philosophical basis for criminal culpability, and we punish transgressors of the law on the grounds that they made a condemnable choice. A burning building may indeed harm people, but the law does not punish that building, as it has no agency. And so human laws regulate human acts, and not the motivations or behaviors of their surroundings. The corollaries to this are the fundamental rights we have. During the Enlightenment, the fundamental rights of people were articulated as core entitlements to protect the exercise of human agency. The rights to life, liberty, association, speech, vote, and conscience are all underpinned with a presumption of agency, as they are outputs of that agency. But agency itself has not been articulated as a right per se, as it has always been presumed to exist simply by virtue of our personhood. As such, we do not have an express right to agency that is contra mundum—that is, a right to agency that is exercisable against the environment itself. We do not have a right against the heavens or the undue influence of motivated and thinking spaces to mediate the exercise of our agency. At the time of America’s founding, a situation where our agency could be manipulated by a motivated and thinking environment was never contemplated as a possibility. For the Founding Fathers, this would have been a power known only to God.
We can already see how algorithms competing to maximize our attention have the capacity not only to transform cultures but to redefine the experience of existence. Algorithmically reinforced “engagement” lies at the heart of our outrage politics, call-out culture, selfie-induced vanity, tech addiction, and eroding mental well-being. Targeted users are soaked in content to keep them clicking. We like to think of ourselves as immune from influence or our cognitive biases, because we want to feel like we are in control, but industries like alcohol, tobacco, fast food, and gaming all know we are creatures subject to cognitive and emotional vulnerabilities. And tech has caught on to this with its research into “user experience,” “gamification,” “growth hacking,” and “engagement,” activating ludic loops and reinforcement schedules in the same way slot machines do. So far, this gamification has been contained to social media and digital platforms, but what will happen as we further integrate our lives with networked information architectures designed to exploit evolutionary flaws in our cognition? Do we really want to live in a “gamified” environment that engineers our obsessions and plays with our lives as if we are inside its game?
The underlying ideology within social media is not to enhance choice or agency, but rather to narrow, filter, and reduce choice to benefit creators and advertisers. Social media herds the citizenry into surveilled spaces where the architects can track and classify them and use this understanding to influence their behavior. If democracy and capitalism are based on accessible information and free choice, what we are witnessing is their subversion from the inside.