Don't Be Evil

by Rana Foroohar


  In February 2018, the U.S. Department of Justice charged thirteen Russians and three companies with election manipulation, including spreading false and divisive content that helped propel Donald Trump to victory.15 And as Justice Department investigations have now shown, these entities used U.S. technology platforms, most notably Facebook, which was found to have accepted $100,000 in advertising from Russian actors, but also Instagram (owned by Facebook), Twitter, YouTube (owned by Google), and PayPal to execute their crimes.16

  And yet, these companies have almost without exception refused to accept responsibility for any of this. A leaked memo written by Facebook VP Andrew Bosworth in 2016 and published by BuzzFeed in 2018 gives a clue as to why: “We connect people. Period. That’s why all the work we do in growth [meaning, the privacy-compromising techniques] is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in.” In another portion of the memo, Bosworth speculated about what could happen if the connections go bad. “Maybe it costs someone a life by exposing someone to bullies,” or “Maybe someone dies in a terrorist attack coordinated on our tools.”17 Apparently, the leadership felt that every possible negative externality was a price worth paying in service of Facebook’s higher mission of connecting the world.

  In September 2018, journalist Evan Osnos published a telling portrait of Zuck in The New Yorker, describing his deny-and-deflect attitude toward not only election manipulation, but the many other scandals that have embroiled Facebook: the breach of user-data agreements, the use of behavioral technology to knowingly manipulate children, the way in which authoritarian regimes like the Burmese junta have used the platform to orchestrate genocide.18 From beginning to end, Zuck’s attitude about all these crises was the same—nothing to see here.

  “The idea that fake news on Facebook—of which, you know, it’s a very small amount of content—influenced the election in any way, I think, is a pretty crazy idea,” he said in 2016. And even in the face of all the evidence to the contrary, in the summer of 2018, his stance had not shifted. “I find the notion that people would only vote some way because they were tricked to be almost viscerally offensive,” he told Osnos, in a statement that is truly stunning, given the company’s development and deployment of technologies that do just that.

  These problems did not come without warning. People in the Valley were openly fretting about them as early as 2011. That’s when Eli Pariser, the board president of the liberal political organization MoveOn.org, gave a TED Talk about how both Facebook and Google were using algorithms that encouraged people to migrate into political silos populated only by those who thought as they did. The talk, titled “Beware Online Filter Bubbles,”19 came out the same year that Google was introducing its own social network and vying with Facebook to create ever more detailed—that is to say, more valuable to advertisers—profiles of users’ online activity. All of the fears that Page and Brin had articulated in their 1998 paper about nefarious actors who might take advantage of Internet users for their own gain were coming true.

  Yet, if changing their business model compromised profits, then it was clear which road the platforms would choose. “It took me a very long time to accept that Zuck and Sheryl had fallen victim to overconfidence,” says McNamee.20 Even as evidence piled up, following the U.S. Senate’s election meddling report, that Facebook and other platforms had been used to spread disinformation and suppress votes, “I wanted to believe that [they] would eventually change their approach.”

  It wasn’t to be.

  Surveillance Capitalism on Steroids

  If election manipulation were the only way in which Big Tech was undermining democracy and civil liberty, it would be bad enough. But it’s not. Whether or not you voted in 2016, you are at risk of being targeted by the tools of a growing surveillance state in your daily life.

  In the 2002 film Minority Report, Tom Cruise played a policeman working in a specialized division in Virginia known as PreCrime that apprehended would-be criminals based on foreknowledge of their crimes provided by psychics. The mass surveillance and technology depicted in the movie—location-based personalized advertising, facial recognition, newspapers that updated themselves—are ubiquitous today. The only thing director Steven Spielberg got wrong was the need for psychics. Instead, law enforcement can turn to data and technologies provided by Google, Facebook, Amazon, and the intelligence contractor Palantir, whose data tools are now so widely used by police that the reality of data-driven crime fighting in the United States has come to mirror dystopian science fiction.

  Facebook ad tools, for example, have been used to gather data on people who expressed an interest in Black Lives Matter, data of the sort that—as the ACLU has exposed—was then sold to police departments via a third-party data-monitoring and sales company called Geofeedia.21 This is far from unusual; the collection and sale of data from not only Big Tech firms but myriad other companies via third-party data brokers is a common practice—indeed, it’s the fastest-growing part of the U.S. economy.22

  A 2019 report from the Democratic strategy group Future Majority found that “thousands of companies gather personal information that their customers or clients provide in the course of doing business with them, and then sell their information to large data brokers, such as credit bureaus.23 In turn, those data brokers analyze, package and resell the information, often as personal profiles. Their customers range from employers involved in hiring and companies planning marketing campaigns, to banks and mortgage lenders, colleges and universities, political campaigns and charities.” And, it should be noted, public entities like law enforcement and other government agencies.

  “In addition, credit card companies and healthcare data firms also routinely gather, analyze and profit from the personal information of their users.” Many of us might be surprised to learn this, considering the HIPAA restrictions on the sharing of healthcare data, and the fact that it’s difficult for even credit card users themselves to get access to their own credit scores and data. But, as the report notes, “those restrictions apply only to certain types of financial information—for example, personal bank balances, but not loan repayment data—and the requirements on healthcare information apply to healthcare providers but not to pharmacies or medical device producers.” That means that the online pharmacy owned by, say, Amazon, isn’t bound by such rules—nor is Fitbit, or any number of fitness apps that might be tracking your health information and physical movements.

  All of that “personal financial and health-related information can be gathered, analyzed and sold in anonymized forms, which algorithms can match to most people or simply generate detailed financial and health-related profiles based on the extensive information that internet platforms and data brokers have on everyone. Finally, the personal data now routinely used for commercial ends are not limited to the information that people reveal through their activities on internet platforms or through the goods and services they purchase. In addition, the Internet of Things has projected personal data gathering into many other aspects of people’s lives. For example, smart TVs collect, analyze and sell personal information on who owns them and what they watch. Smart cars and smartphones collect, analyze and sell personal information on who owns them and every place they go. Smart beds and smart fitness bands collect, analyze and sell information on who uses those products and their temperatures, heart rates and respiration. Further, the new generation of wifi-based home devices that respond to people’s voice commands—led by Amazon’s Alexa, Echo, and Dot, and Google Nest and Google Home—can capture not only personal information about the people who buy and install them but what they say in the range of those devices.”24

  In short, the American surveillance state isn’t science fiction—it’s already here.

  The fact that Silicon Valley companies portray themselves as uber-liberal and praise groups like Black Lives Matter while also monetizing their surveillance is a rich and dark irony, but by no means the only one. Consider Amazon’s Orwellian-sounding Rekognition image processing system, which the ACLU recently called upon Jeff Bezos to stop selling to law enforcement officials, saying it was “primed for abuse in the hands of government.” The group argued that the system posed a particularly “grave threat to communities, including people of color and immigrants,” in a nod to studies that have shown that facial recognition software regularly misidentifies people of color.25

  But consider also that big data policing was first pushed in the United States several years ago in part as a response to racism and bias. Computational models have been used in policing for some time now; since 1994, a system known as CompStat, which links crime and enforcement statistics, has been used by law enforcement officials in New York and then elsewhere. The attacks of 9/11 led to a new push for “intelligence-led policing,” which connected local and federal law enforcement agencies and their data.

  William Bratton, who had run the New York City police force, moved in 2002 to Los Angeles and brought with him “predictive policing,” which aimed to use as much data, from as many sources as possible, to predict where crime might occur before it did. Data from myriad sources—crime reports, but also traffic monitoring, calls for public services, and information from the cameras now located all over L.A. and other major cities—could be used to build profiles on individuals. Police could then tag these profiles in a kind of RSS feed that gave officers real-time information about what those individuals were doing. The result? You might have a traffic violation one day, and depending on what the algorithm knew about you—where you went, what you did—you could be on a police watch list the next day.

  The idea was that policing by algorithm would help circumvent human cognitive biases, such as the conflation of blackness and criminality. And yet, the algorithms came with their own problems. Sarah Brayne, an academic at the University of Texas, has studied the use of big data within the Los Angeles Police Department, which has worked with Palantir (the firm that helps collect and organize much of the data) to build predictive models of where crime might occur.26 She found that big data had fundamentally changed the nature of policing, making it less about reacting to crime and more about prediction and mass surveillance. The bottom line was that the merging of multiple data sources into Palantir’s models (just think of all the bits of data about yourself that we’ve already learned can be collected, collated, and sold, and then add in whatever the police authorities collected themselves) meant that people who had never had any encounters with police might very well end up under surveillance—something that is uncomfortably at odds with the principle of “innocent until proven guilty.” What started out as a way to make crime fighting more fair turned out to make it just the opposite.

  As Brayne put it in her paper, “This research highlights how data-driven surveillance practices may be implicated in the reproduction of inequality in at least three ways: by deepening the surveillance of individuals already under suspicion; widening the criminal justice dragnet unequally; and leading people to avoid ‘surveilling’ institutions that are fundamental to social integration.” She also noted, importantly, that “mathematized police practices serve to place individuals already under suspicion under new and deeper forms of surveillance, while appearing to be objective, or, in the words of one captain, ‘just math.’ ”

  Despite the stated intent of the system to avoid bias in police practices, it hides both intentional and unintentional bias in policing and creates a self-perpetuating cycle: Individuals under heightened surveillance have a greater likelihood of being stopped. Such practices work against individuals already in the criminal justice system, while obscuring the role of enforcement in shaping risk profiles that might actually trap them in the system. Moreover, individuals living in low-income, minority areas have a higher probability of their “risk” being quantified than those in more advantaged neighborhoods where the police are not conducting such surveillance.

  This is a big deal, not just because it’s racist, but also because it misses loads of nefarious activity; indeed, the two issues are tied together. The well-dressed insider trader sitting in his Upper East Side apartment may not trigger the surveillance algorithms, but his crime is far more consequential and costly to society than that of a hoodie-wearing minor offender who’s had a traffic violation and is now suddenly trapped in a snowballing loop of surveillance policing. This type of social control, of course, “has consequences that reach beyond individuals.” The topic of “algoracism” is now a hot one, as activists and civil rights attorneys struggle to stay ahead of the way in which Big Tech has turned policing upside down, with potentially grave civil liberty implications for entire communities.

  As alarming as such shifts are, we are only at the beginning of the creation of a world in which everything we do and say, online and offline, can be watched and used, both by Big Tech and by the public sector itself. Consider the Sidewalk Labs project in Toronto. Sidewalk Labs, the Google parent Alphabet’s “urban innovation” arm, works with local governments to place sensors and other technologies around cities (ostensibly to improve city services, but also, of course, to garner data for Google), and it is now creating a “smart city” there. The high-tech neighborhood, built from scratch along twelve acres of Toronto’s waterfront, will have sensors to detect noise and pollution, as well as heated driveways for smart cars. Robots will deliver mail through underground corridors, and all materials used in the city will be green.27

  Whether you find such an idea intriguing or creepy, the planning of the entire project has been opaque. Neither the city nor Google released all the details of the project immediately; rather, those details have been leaked by investigative journalists. A February 2019 Toronto Star piece revealed that the plans for the smart city were much broader than the public had first thought: Google was actually planning to build its own mass transit line to the area, in exchange for a share of the property taxes, development fees, and increased land value that would ordinarily go into the city coffers.28 Think about that for a minute: One of the richest companies in the world is asking a city government, the sort of entity that it regularly petitions for better infrastructure, education, and services, to give up the money that would help it provide exactly that.

  Then there’s the question of who gets to keep the data. Sidewalk sensors would be able to track individuals everywhere they go—sitting on park benches, walking across the street, spending time with family members or lovers. Google has pledged to keep the data from all this “anonymous,” meaning not associated with any particular individual, and to put at least some of it into a data bank to be used to improve traffic flows and city services. But it hasn’t pledged to keep the data local—meaning that it could be used by Google in any of its operations.

  No wonder local protesters are now up in arms about the project as more details emerge. A sprawling “feedback wall” at the site offers visitors a chance to give answers to pre-written questions, such as “I’m not excited about…” One visitor had written “Surveillance state.” Another scrawled “Making Toronto Great Again.”29 Given the growing outrage, it will be interesting to see if Sidewalk Labs meets the same fate as Amazon HQ2.

  But if you think that’s creepy, consider Google’s Dragonfly search engine project. In August 2018, the Intercept, an investigative journalism website, reported that Google was developing a censored version of its search engine for China, code-named Dragonfly.30 This came as an enormous shock not only to the general public, but to the vast majority of Google’s own workforce. The idea of helping the Chinese Communist Party keep unwanted information from its own people, and of allowing the party to track searches back to individuals via their phone numbers, seemed like the antithesis of “Don’t be evil.”

  This was particularly true given Google’s history with the Middle Kingdom. The company had been in China once before, when Google.cn launched in 2006. Even back then, Google search was not allowed to guide Chinese people to certain information that the government deemed harmful, such as the student-led Tiananmen Square protests of 1989, which ended with the party opening fire on its own people and killing more than ten thousand.31 But the company decided that simply being in the country and helping such a huge population gain access to some search would help push the government toward greater openness.

  It was, in retrospect, a naïve assumption. In China, the only power is the Communist Party. And as is always the case when new technology comes into the Middle Kingdom, the party studied it, controlled it, and eventually bested it by backing a homegrown rival to Google, Baidu, which was given more freedom to operate within the country in exchange for more government control. By 2009, Google held only a third of the search market to Baidu’s 58 percent.32 A year later, Google decided to pull out of the Chinese market after a hacking episode known as Operation Aurora, in which entities within China targeted Google’s intellectual property, its Gmail accounts, and, most important, the identities of human rights activists who had been using the platform. The prospect of an autocratic government using the platform to spy on and potentially persecute activists finally forced the company out of China.

  China has hardened politically since then, and is today, under the regime of Xi Jinping, arguably as repressive as it has been since the Mao era. The Dragonfly revelations, which the company at first denied and then tried to mitigate, made Washington furious, especially since the news broke around the same time that Google was leaving empty seats at Senate hearings about privacy and antitrust, and was also refusing to work with Pentagon officials on American artificial intelligence projects. U.S. Vice President Mike Pence said that the project would “strengthen Communist Party censorship and compromise the privacy of Chinese customers.”

 
