Live Not by Lies

by Rod Dreher


  Given that generations of American students have read Orwell’s novel, you would think they would be inoculated against accepting this kind of invasive technology.

  You would be wrong. In the twenty-first century, Big Brother has found a much more insidious way into our homes. In fact, he has been invited. Nearly 70 million Americans have one or more wireless “smart speakers”—usually manufactured by Amazon or Google—in their residences.4 Smart speakers are voice-recognition devices connected to the internet. They serve as digital assistants, recording vocal commands, and in response, executing actions—obtaining information, ordering retail goods, controlling lights and music, and so forth. For over 25 percent of the population, convenience has overcome privacy concerns.

  Consumerism is how we are learning to love Big Brother. What’s more, Big Brother is not exactly who we expected him to be—a political dictator, though one day he may become that. At the present moment, Big Brother’s primary occupation is capitalist. He’s a salesman, he’s a broker, he’s a gatherer of raw materials, and a manufacturer of desires. He is monitoring virtually every move you make to determine how to sell you more things, and in so doing, learning how to direct your behavior. In this way, Big Brother is laying the foundation for soft totalitarianism, both by creating and implementing the technology for political and social control and by grooming the population to accept it as normal.

  This is the world of “surveillance capitalism,” a term coined by Shoshana Zuboff, a former Harvard Business School professor. In her 2019 book, The Age of Surveillance Capitalism, Zuboff describes and analyzes a new form of capitalism created by Google and perfected by Amazon and Facebook. Surveillance capitalism hoovers up detailed personal data about individuals and analyzes it with sophisticated algorithms to predict people’s behavior.

  The aim, obviously, is to pitch goods and services tailored to individual preferences. No surprise there—that’s merely advertising. The deeper realities of surveillance capitalism, however, are far more sinister. The masters of data aren’t simply trying to figure out what you like; they are now at work making you like what they want you to like, without their manipulation being detected.

  And they’re doing this without the knowledge or informed permission of the people whose lives they have colonized—and who are at present without means to escape the surveillance capitalists’ web. You may have given up Facebook over privacy concerns, and may have vowed never to have a smart device under your roof, but unless you are a hermit living off the grid, you are still thoroughly bounded and penetrated by the surveillance capitalist system.

  “This power to shape behavior for others’ profit or power is entirely self-authorizing,” Zuboff told The Guardian. “It has no foundation in democratic or moral legitimacy, as it usurps decision rights and erodes the processes of individual autonomy that are essential to the function of a democratic society. The message here is simple: Once I was mine. Now I am theirs [italics added].”5

  The story of surveillance capitalism begins in 2003, when Google, by far the world’s largest internet search engine, patented a process to allow it to use the vast amount of data it gathered from individual searches in a new way. The company’s data scientists had figured out how to utilize “data exhaust”—surplus information obtained from searches—to predict the kind of advertising that would most appeal to individual users.

  Before long, “data extraction” became the basis for a new tech-based economy. Google, Facebook, Amazon, and others discovered how to make fortunes by gathering, packaging, and selling personal data about individuals. By now, it is not a matter of vending your name, address, and email address to third parties. It is vastly more thorough. Web-connected sensors are reporting facts and data about you constantly.

  Consider this scenario: The alarm on your smartphone by your bedside buzzes you out of bed in the morning. While you were asleep, the apps on the phone uploaded the previous day’s record of your activities to their owners. You crawl out of bed, brush your teeth, put on your shorts and sneakers, and take a twenty-minute run around your neighborhood. The Fitbit on your wrist records your workout information and uploads it.

  Back home, you shower, go into the kitchen to pour yourself a bowl of cereal, and sit down at the kitchen table to check your Gmail account, Facebook, and your favorite news and information sites. Everything you write on Gmail is processed by Google, which scans the text for key words to direct advertising to you. Everything you post, Like, or forward on Facebook is recorded by the company and used in its advertising. The company’s algorithms are so sophisticated now that Facebook can make detailed predictions about you just by associating certain data points. When you scan newspaper websites, cookies embedded in your browser report back about which stories you’ve read.

  As you drive to work, sensors in your car record and report your driving habits, because you allowed your car insurance company to capture this data in exchange for a lower rate for safe drivers. Those same sensors also record which stores you stop at along the way and report that back to the insurer, which sells the data to marketers.

  All day long, the smartphone in your pocket sends data about its location—and therefore, yours—back to your service provider. You are trackable at all times—and disabling location services in your device is not foolproof. All the requests you make of Siri, your digital assistant? Recorded and monetized. All the Google searches during the day? Recorded and monetized. You go out for lunch and pay with your credit or debit card? Marketers know where you’ve eaten and match that data to your personal profile. Stop at the supermarket on the way home to pick up a few things and pay with the card? They know what you bought.

  Your smart refrigerator is sending data about your eating habits to someone. Your smart television is doing the same thing about what you’re watching. Your smart television will soon be watching you, literally. Zuboff reports on prizewinning research by a company called Realeyes that will use facial recognition to make it possible for machines to analyze emotions from facial responses. When this technology becomes available, your smart TV (or smartphone or laptop) will be able to monitor your involuntary response to commercials and other programming and report that information to outside sources. It doesn’t take a George Orwell to understand the danger posed by this all-but-inescapable technology.

  The Politics of Surveillance

  Why should corporations and institutions not use the information they harvest to manufacture consent to some beliefs and ideologies and to manipulate the public into rejecting others?

  In recent years, the most obvious interventions have come from social media companies deplatforming users for violating terms of service. Twitter and Facebook routinely boot users who violate their standards by promoting violence, sharing pornography, and the like. YouTube, which has two billion active users, has demonetized users who made money from their channels but who crossed the line with content YouTube deemed offensive. To be fair to these platform managers, there really are vile people who want to use these networks to advocate for evil things.

  But who decides what crosses the line? Facebook bans what it calls “expression that . . . has the potential to intimidate, exclude or silence others.” To call that a capacious definition is an understatement. Twitter boots users who “misgender” or “deadname” transgendered people. Calling Caitlyn Jenner “Bruce,” or using masculine pronouns when referring to the transgendered celebrity, is grounds for removal.

  To be sure, being kicked off of social media isn’t like being sent to Siberia. But companies like PayPal have used the guidance of the far-left Southern Poverty Law Center to make it impossible for certain right-of-center individuals and organizations—including the mainstream religious-liberty law advocates Alliance Defending Freedom—to use their services.6 Though the bank issued a general denial when asked, JPMorgan Chase has been credibly accused of closing the accounts of an activist it associates with the alt-right.7 In 2018, Citigroup and Bank of America announced plans to stop doing some business with gun manufacturers.8

  It is not at all difficult to imagine that banks, retailers, and service providers that have access to the kind of consumer data extracted by surveillance capitalists would decide to punish individuals affiliated with political, religious, or cultural groups those firms deem to be antisocial. Silicon Valley is well known to be far to the left on social and cultural issues, a veritable mecca of the cult of social justice. Social justice warriors are known for the spiteful disdain they hold for classically liberal values like free speech, freedom of association, and religious liberty. These are the kinds of people who will be making decisions about access to digital life and to commerce. The rising generation of corporate leaders takes pride in their progressive awareness and activism. Twenty-first-century capitalism is not only all in for surveillance, it is also very woke.

  Nor is it hard to foresee these powerful corporate interests using that data to manipulate individuals into thinking and acting in certain ways. Zuboff quotes an unnamed Silicon Valley bigwig saying, “Conditioning at scale is essential to the new science of massively engineered human behavior.” He believes that by close analysis of the behavior of app users, his company will eventually be able to “change how lots of people are making their day-to-day decisions.”9

  Maybe they will just try to steer users into buying certain products and not others. But what happens when the products are politicians or ideologies? And how will people know when they are being manipulated?

  If a corporation with access to private data decides that progress requires suppressing dissenting opinions, it will be easy to identify the dissidents, even if they have said not one word publicly.

  In fact, they may have their public voices muted. British writer Douglas Murray documented how Google quietly weights its search results to return more “diverse” findings. Though Google presents its search results as disinterested, Murray shows that “what is revealed is not a ‘fair’ view of things, but a view which severely skews history and presents it with a bias from the present.”10

  Result: for the search engine preferred by 90 percent of global internet users, “progress”—as defined by left-wing Westerners living in Silicon Valley—is presented as normative.

  In another all-too-common example, the populist Vox party in Spain had its Twitter access temporarily suspended when, in January 2020, a politician in the Socialist Party accused Vox of “hate speech” for opposing the Socialist-led government’s plan to force schoolchildren to study gender ideology, even if parents did not consent.

  To be sure, Twitter, a San Francisco-based company with 330 million global users, especially among media and political elites, is not a publicly regulated utility; it is under no legal obligation to offer free speech to its users. But consider how it would affect everyday communications if social media and other online channels that most people have come to depend on—Twitter, Gmail, Facebook, and others—were to decide to cut off users whose religious or political views qualified them as bigots in the eyes of the digital commissars.

  What is holding the government back from doing the same thing? It’s not for lack of technological capacity. In 2013, Edward Snowden, the renegade National Security Agency analyst, revealed that the US federal government’s spying was vastly greater than previously known. In his 2019 memoir, Permanent Record, Snowden writes of learning that

  the US government was developing the capacity of an eternal law-enforcement agency. At any time, the government could dig through the past communications of anyone it wanted to victimize in search of a crime (and everybody’s communications contain evidence of something). At any point, for all perpetuity, any new administration—any future rogue head of the NSA—could just show up to work and, as easily as flicking a switch, instantly track everybody with a phone or a computer, know who they were, where they were, what they were doing with whom, and what they had ever done in the past.11

  Snowden writes about a public speech that the Central Intelligence Agency’s chief technology officer, Gus Hunt, gave to a tech group in 2013 that caused barely a ripple. Only The Huffington Post covered it. In the speech, Hunt said, “It is really very nearly within our grasp to be able to compute on all human-generated information.” He added that after the CIA masters capturing that data, it intends to develop the capability of saving and analyzing it.12

  Understand what this means: your private digital life belongs to the State, and always will. For the time being, we have laws and practices that prevent the government from using that information against individuals, unless it suspects they are involved in terrorism, criminal activity, or espionage. But over and over, dissidents told me that the law is not a reliable refuge: if the government is determined to take you out, it will manufacture a crime from the data it has captured, or otherwise deploy it to destroy your reputation.

  The spread of the cult of social justice, together with the reach of surveillance capitalism into areas that the Orwellian tyrants of the communist bloc could only have aspired to, has created an environment favorable to the emergence of soft totalitarianism. Under this Pink Police State scenario, powerful corporate and state actors will control populations by massaging them with digital velvet gloves, and by convincing them to surrender political liberties for security and convenience.

  China: The Mark of the East

  We don’t have to imagine the dystopian merging of commerce and political authoritarianism in a total surveillance state. It already exists in the People’s Republic of China. No doubt China’s totalitarianism has become far more sophisticated than the crude Sino-Stalinism practiced by its first leader, Mao Zedong. Even in a worst-case scenario, it is hard to imagine the United States becoming as ruthless as the state that has incarcerated a million of its Muslim citizens in concentration camps in an effort to destroy their cultural identity.13

  Nevertheless, China today proves that it is possible to have a wealthy, modern society and still be totalitarian. The techniques of social control that have become common in China could be adapted by America with relative ease. The fact that concentration camps in the American desert sound far-fetched should not keep us from understanding how much of China’s surveillance system could be quickly made useful to corporate and government controllers here.

  In the early 1980s, when Deng Xiaoping opened China to free-market reform, Western experts predicted that liberal democracy wouldn’t be far behind. They believed that free markets and free minds were inseparable. All the West had to do was sit back and watch capitalism free the liberal democrat deep inside China’s collective heart.

  Forty years later, China has become spectacularly rich and powerful, creating in a single generation a robust, colorful consumer society from a mass population that had known poverty and struggle since time immemorial. The Chinese Communist Party, which worked this miracle, not only maintains a secure grip on political power but also is turning the nation of 1.4 billion souls into the most advanced totalitarian society the world has ever known.

  Beijing’s use of consumer data, biometric information, GPS tracking coordinates, facial recognition, DNA, and other forms of data harvesting has turned, and continues to turn, China into a beast never before seen worldwide, not even under Mao or Stalin. In China, the tools of surveillance capitalism are employed by the surveillance state to administer the so-called social credit system, which determines who is allowed to buy, sell, and travel, based on their social behavior.

  “China is about to become something new: an AI-powered techno-totalitarian state,” writes journalist John Lanchester. “The project aims to form not only a new kind of state but a new kind of human being, one who has fully internalized the demands of the state and the completeness of its surveillance and control. That internalization is the goal: agencies of the state will never need to intervene to correct the citizen’s behavior, because the citizen has done it for them in advance.”14

  He is talking about Beijing’s pioneering use of artificial intelligence and other forms of digital data gathering to create a state apparatus that not only monitors all citizens constantly but also can compel them to behave in ways the state demands without ever deploying the secret police or the threat of gulags (though those exist for the recalcitrant), and without suffering the widespread poverty that was the inevitable product of old-style communism.

  The great majority of Chinese pay for consumer goods and services using smartphone apps or their faces, via facial recognition technology. These provide consumer convenience and security, making life easier for ordinary people. They also generate an enormous amount of personal data about each Chinese individual, all of which the government tracks.

  The state has other uses for facial recognition technology. Surveillance cameras are ubiquitous on Chinese streets, recording the daily comings and goings of the nation’s people. Beijing’s software is so advanced that it can easily check facial scans against the central security database. If a citizen enters an area forbidden to him—a church, say—or even if a person is merely walking in the opposite direction of a crowd, the system automatically records it and alerts the police.

  In theory, police don’t have to show up at the suspect’s door to make him pay for his disobedience. China’s social credit system automatically tracks the words and actions, online and off, of every Chinese citizen, and grants rewards or demerits based on obedience. A Chinese who does something socially positive—helping an elderly neighbor with a chore, or listening to a speech of leader Xi Jinping—receives points toward a higher social credit score. On the other hand, one who does something negative—letting his or her dog poop on the sidewalk, for example, or making a snarky comment on social media—suffers a social credit downgrade.

 
