Google’s first political challenge came from an unlikely source: California state senator Liz Figueroa, whose district spanned a huge swath of Silicon Valley and included Google HQ in Mountain View. Disturbed by Google’s email scanning, the senator introduced legislation to prohibit email providers from collecting personally identifying information unless they received explicit consent from all parties in an email conversation. Her office described it as a pioneering privacy law for the Internet age: “First-in-the-nation legislation would require Google to obtain the consent of every individual before their e-mail messages are scanned for targeted advertising purposes.”
“Telling people that their most intimate and private e-mail thoughts to doctors, friends, lovers, and family members are just another direct marketing commodity isn’t the way to promote e-commerce,” Senator Figueroa explained, when she announced the bill on April 21, 2004. “At minimum, before someone’s most intimate and private thoughts are converted into a direct marketing opportunity for Google, Google should get everyone’s informed consent.”61
The proposed law sent Page and Brin into a panic. Just as the two were preparing to take the company public, they faced legislation that threatened their business model. Getting people’s consent—telling them upfront how invasively Google tracked their every move—was Page’s nightmare scenario: a public disclosure of the company’s data collection practices that could trigger a public relations disaster, or worse.
Google executives set up a war room to deal with the growing avalanche of criticism. Brin commanded the effort.62 He was furious at Google’s critics: they were ignorant; they did not understand the technology; they had no clue about anything. “Bastards, bastards!” he yelled.63 Page made personal calls to sympathetic tech journalists, explaining that there was no privacy problem and that Google didn’t really spy on users. He also organized a face-to-face meeting with Senator Figueroa and her chief of staff.64
“We walk into this room, and it’s myself and two of my staff—my chief of staff and one of my attorneys. And across from us was Larry, Sergey, and their attorney,” recounted the senator. Brin immediately launched into a lengthy explanation of the company’s privacy policies, arguing that Figueroa’s criticisms were baseless.
“Senator, how would you feel if a robot went into your home and read your diary and read your financial records, read your love letters, read everything, but before leaving the house, it imploded? That’s not violating privacy.”
“Of course it is,” she replied.
But Sergey persisted: “No, it isn’t. Nothing’s kept. Nobody knows about it.”
“That robot has read everything. Does that robot know if I’m sad or if I’m feeling fear, or what’s happening?” she answered, still defiant and unwilling to bend.
Brin looked directly at her and answered cryptically: “Oh, no. That robot knows a lot more than that.”
When Brin’s attempt to talk the senator down didn’t work, the company brought in a team of high-powered lobbyists and PR people to massage the message and restore Google’s righteous image. Leading the pack was Andrew McLaughlin, Google’s smooth and smiley chief public relations strategist who would later serve as President Barack Obama’s deputy chief technology officer. He knew exactly how to neutralize Senator Liz Figueroa: Al Gore. “I mobilized the Big Al,” he later bragged.65
After losing the 2000 presidential election to George Bush, Vice President Gore pivoted to a lucrative career as a tech venture capitalist. As part of that pivot, he accepted Google’s offer to be a “virtual board member,” meaning that from time to time he used his power and connections to resolve Google’s political problems. Now, at McLaughlin’s request, Gore summoned the prickly senator to his suite at the Ritz-Carlton in downtown San Francisco. There he gave her a stern talking-to, lecturing her about algorithms and robotic analysis. “He was incredible,” recounted McLaughlin. “He stood up and was drawing charts and did this long analogy to the throw weight of the ICBM, the Minuteman missile.”66
Whatever he did in that room, it worked. Senator Figueroa dropped her opposition, and the first legal challenge to Google’s surveillance business model faded. And at least one journalist rejoiced: “The only population likely not to be delighted by Gmail are those still uncomfortable with those computer-generated ads. Those people are free to ignore or even bad-mouth Gmail, but they shouldn’t try to stop Google from offering Gmail to the rest of us,” declared New York Times technology journalist David Pogue in May. “We know a good thing when we see it.”67
A few months later, on August 19, 2004, Google went public. When the bell rang that afternoon to close NASDAQ trading, Google was worth $23 billion.68 Sergey Brin and Larry Page attained oligarch status in the space of a single workday, while hundreds of their employees became instant multimillionaires, including the company cook.
But concerns about Google’s business model would continue to haunt the company. Time proved Hoofnagle right. There wasn’t very much difference between Google’s approach and the surveillance technology deployed by the NSA, CIA, and Pentagon. Indeed, sometimes they were identical.
Minority Report
October 6, 2014. I’m at the office of UCLA professor Jeffrey Brantingham. It’s warm and sunny, and students lounge on the grass outside his windows. Inside, the two of us lean over his computer screen, inspecting an interactive crime map. He zooms in on Venice Beach.
“This used to be the heroin capital of LA. A lot of heroin trafficking going on here. You can see how it changes,” he says, toggling between day and night crime patterns for West Los Angeles. “Then, if you look farther afield in Pacific, you say what’s going on with some of these other places? Like in here. This is Playa Vista. Up here, Palms.”69
Brantingham, willowy and soft-spoken with a short gray beard and spiky gelled hair, is a professor of anthropology. He is also a cofounder of PredPol Inc., a hot new predictive policing start-up that came out of counterinsurgency research funded by the Pentagon to predict and prevent attacks on American soldiers in Iraq.70 In 2012, the researchers worked with the Los Angeles Police Department to apply their algorithmic modeling to predicting crime. Thus, PredPol was born.
The company’s name evokes Philip K. Dick’s Minority Report, but the company itself boasts a spectacular success rate: cutting crime by up to 25 percent in at least one city that deployed it.71 It works by ingesting decades of crime data, combining them with data about the local environment—factors such as the locations of liquor stores, schools, and highway on-ramps—and then running all the variables through a proprietary algorithm that generates hotspots where criminals are most likely to strike next.
“It was adapted and modified from something that was predicting earthquakes,” Brantingham explains as we sip coffee. “If you think about L.A. and earthquakes, for any given earthquake that happens, you can actually assign where that comes from in a causal sense quite well. After an earthquake happens on one of these faults, you get aftershocks, which occur nearby to where the main shock was and close in time.
“Crime is exactly the same,” he continues. “Our environment has lots of built features that are crime generators that are not going anywhere. A great example is a high school. High schools are not going anywhere for the most part. It is a built feature of the environment. And what do high schools have? Lots of young men aged fifteen to seventeen or fifteen to eighteen, and no matter where you go on the planet, young men ages fifteen to seventeen get into trouble. They do. It will always be that way, because of testosterone or girls or whatever it is. It’s our primate heritage.”
I scratch my head, nodding in agreement. It still doesn’t make much sense to me. Surely, one has to account for the fact that humans have free will. Surely, they would resist being treated like giant slabs of floating lava rock violently rubbing against one another? Weren’t there deeper social and political causes of crime beyond simple infrastructure—things like poverty and drug addiction? On the topic of high schools and kids being kids, shouldn’t there be other ways of dealing with teenage troublemakers than criminalization and concentrated policing?
Brantingham counters that PredPol isn’t trying to fix society, just help cops prevent crime. “PredPol is not about fighting the root causes of crime,” he says. “PredPol is all about getting that officer the tool to make it harder for that crime to occur, and not about saying we don’t need to fix meth addiction. We do need to fix meth addiction.” In short: someone else has to do the hard work of improving society by dealing with root social and economic causes of crime. PredPol is simply in the business of helping cops more efficiently contain the mess that exists today.
In 2014, PredPol was one of many companies competing for a fledgling but rapidly expanding market in predictive policing technologies.72 Big, established companies like IBM, LexisNexis, and Palantir all offered predictive crime products.73 PredPol, though small, had raked in contracts with police departments across the country: Los Angeles; Orange County in central Florida; Reading, Pennsylvania; Tacoma, Washington. Local newspapers and television stations loved PredPol’s story: the high-tech miracle cure cash-strapped police departments had been waiting for. It enabled law enforcement officers to reduce crime at low cost. With a price tag of $25,000 to $250,000 a year, depending on a city’s population, PredPol seemed like a bargain.
Predictive policing was young, but it was already drawing criticism from activists and social scientists who saw it as a rebranding of the age-old tactic of racial and economic profiling, spiffed up with an objective, data-driven sheen.74 Wealthy areas and individuals never seemed to be targeted for predictive policing, nor did the technique focus on white-collar criminals. Journalists and criminologists blasted PredPol in particular for making claims that it simply could not back up.75
Despite these knocks, PredPol had supporters and backers in Silicon Valley. Its board of directors and advisory board included serious heavy hitters: executives from Google, Facebook, Amazon, and eBay, as well as a former managing director of In-Q-Tel, the CIA venture capital outfit operating in Silicon Valley.76
Back in his office, Brantingham offers little about the company’s ties to these Internet giants. Another PredPol executive informed me that, behind the scenes, Google was one of PredPol’s biggest boosters and collaborators. “Google actually came to us,” Donnie Fowler, PredPol’s director of business development, told me by phone.77 “This is not the case of a little, tiny company going to a big behemoth like Google and saying that the only way we’ll survive is if we piggyback on you. It is a very mutually beneficial relationship.”
He bragged that, unlike other companies, PredPol did not simply license Google’s technology to render the mapping system embedded in its product; it also worked with Google to develop customized functionality, including “building additional bells and whistles and even additional tools for law enforcement.” He was straightforward about why Google was so proactive about working with his company. “Their last frontier is to sell their technology to governments. They’ve done consumers. They’ve done business.” And PredPol was a perfect sales prop—a powerful example of police departments leveraging Google technology to keep people safe. “One of those Google guys told me: ‘You complete us,’” Fowler said with an air of satisfaction.
Cops? Government contractors? Data-driven counterinsurgency technology? Crime prediction powered by a ubiquitous Internet platform? Was he talking about Google? Or was it one of those Cold War cybernetic counterinsurgency systems the Pentagon dreamed about for so long? Was there a difference?
I shake Brantingham’s hand and leave his office. As I walk across UCLA’s campus to my car, I think about our conversation. Based on what I have already found investigating Silicon Valley’s private surveillance business, I am not that surprised to learn that Google is in bed with a crime prediction start-up spun off from counterinsurgency research.
The Internet has come a long way since Larry Page and Sergey Brin converted Google from a Stanford PhD project to a multi-billion-dollar company. But in a lot of ways it hasn’t changed much from its ARPANET days. It’s just gotten more powerful.
Development on the consumer front was the most dramatic. The commercial Internet we know today formed in the early 1990s, when the National Science Foundation privatized the NSFNET. Within the space of two decades, the network grew from a simple data and telecommunications medium into a vast global internetwork of computers, smartphones, apps, fiber-optic cables, cellular networks, and warehouse data centers so large they could fit entire Manhattan neighborhoods inside them. Today, the Internet surrounds us. It mediates modern life. We read books and newspapers on the Internet; bank, shop, and play video games on the Internet. We talk on the phone, attend college, find jobs, flirt, work, listen to music and watch movies, make dentist appointments, and get psychological counseling on the Internet. Air conditioners, phones, watches, pet food dispensers, baby monitors, cars, refrigerators, televisions, light bulbs—they all connect to the Internet, too. The world’s poorest places may lack plumbing and electricity, but they, sure enough, have access to the Internet.
The Internet is like a giant, unseen blob that engulfs the modern world. There is no escape, and, as Page and Brin so astutely understood when they launched Google, everything that people do online leaves a trail of data. If saved and used correctly, these traces make up a gold mine of information, full of insights into people on a personal level as well as a valuable read on macro cultural, economic, and political trends.
Google was the first Internet company to fully leverage this insight and build a business on the data people leave behind. But it wasn’t alone for long. Something about the technology pushed other companies in the same direction. It happened just about everywhere, from the smallest app to the most sprawling platform.
Netflix monitored the films people watched to suggest other films but also to guide the licensing of content and the production of new shows.78 Angry Birds, the game out of Finland that went viral, grabbed data from people’s smartphones to build profiles, with data points like age, gender, household income, marital status, sexual orientation, ethnicity, and even political alignment, and to transmit them to third-party targeted advertising companies.79 Executives at Pandora, the music streaming service, built a new revenue stream by profiling their seventy-three million listeners, grabbing their political beliefs, ethnicity, income, and even parenting status, then selling the info to advertisers and political campaigns.80 Apple mined data on people’s devices—photos, emails, text messages, and locations—to help organize information and anticipate users’ needs. In its promotional materials, it touted this as a kind of digital personal assistant that could “make proactive suggestions for where you’re likely to go.”
Pierre Omidyar’s eBay, the world’s biggest online auction site, deployed specialized software that monitored user data and matched them with information available online to unmask fraudulent sellers.81 Jeff Bezos dreamed of building his online retailer Amazon into the “everything store,” a global sales platform that would anticipate users’ every need and desire and deliver products without being asked.82 To do that, Amazon deployed a system for monitoring and profiling. It recorded people’s shopping habits, their movie preferences, the books they were interested in, how fast they read books on their Kindles, and the highlights and margin notes they made. It also monitored its warehouse workers, tracking their movements and timing their performance.83 Amazon requires incredible processing power to run such a massive data business, a need that spawned a lucrative side business of renting out space on its massive servers to other companies. Today, the company is not just the world’s biggest retailer but also the world’s biggest Internet hosting company, bringing in $10 billion a year from storing other firms’ data.84
Facebook, which started out as a “hot or not” rating game at Harvard, grew into a global social media platform powered by a Google-like targeted advertising model. The company gobbled up everything its users did: posts, texts, photos, videos, likes and dislikes, friend requests accepted and rejected, family connections, marriages, divorces, locations, political views, and even deleted posts that had never been published. All of it was fed into Facebook’s secret profiling algorithm that turned the details of private lives into private commodities. The company’s ability to link people’s opinions, interests, and group and community affiliations made it a favorite of advertising and marketing firms of all kinds.
Political campaigns in particular loved the direct access Facebook offered. Instead of blanketing airwaves with a single political ad, they could use detailed behavioral profiles to micro-target their messaging, showing ads that appealed specifically to individuals and the issues they held dear. Facebook even allowed campaigns to upload lists of potential voters and supporters directly into the company’s data system, and then use those people’s social networks to identify other people who might be supportive of a candidate.85 It was a powerful and profitable tool. A decade after Mark Zuckerberg launched the company as a Harvard project, 1.28 billion people worldwide used the platform daily, and Facebook minted $62 in revenue for every one of its users in America.86
Uber, the Internet taxi company, deployed data to evade government regulation and oversight in support of its aggressive expansion into cities where it operated illegally. To do this, the company developed a special tool that analyzed users’ credit card information, phone numbers, locations and movements, and the way they used the app to identify whether they were police officers or government officials who might be hailing an Uber only to ticket drivers or impound their cars. If the profile matched, these users were silently blacklisted from the app.87