Privacy advocates protested the “always on” but “undetectable” recording of people and places that eliminates a person’s reasonable expectation of privacy and/or anonymity. They warned of new risks as facial-recognition software is applied to these new data streams and predicted that technologies like Glass would fundamentally alter how people behave in public. By May 2013, a congressional privacy caucus asked CEO Larry Page for assurances on privacy safeguards for Glass, even as a Google conference was held to coach developers on creating apps for the new device. In April 2014 Pew Research announced that 53 percent of Americans thought that smart wearables were “a change for the worse,” including 59 percent of American women.75
Google continued to tough it out, waiting for habituation to kick in. That June it announced that Glass would offer the Livestream video-sharing app, enabling Glass users to stream everything around them to the internet in real time. When asked about these controversial and intrusive capabilities in the hands of any owner of the device, Livestream’s CEO reckoned, “Google is ultimately in charge of… setting the rules.”76 Sergey Brin made it clear that any resistance would be categorically rejected when he told the Wall Street Journal, “People always have a natural aversion to innovation.”77
Adaptation began in 2015 with the announcement that Glass would no longer be available. The company said nothing to acknowledge the public’s revulsion or the social issues that Glass had raised. A short blog post announced, “Now we’re ready to put on our big kid shoes and learn how to run… you’ll start to see future versions of Glass when they’re ready.”78 An eyewear designer was tasked with transforming the look from a futuristic device to something more beautiful.
Redirection began quietly. In June 2015 the FCC’s Office of Engineering and Technology received new design plans for Glass, and September brought fresh headlines announcing that Glass “is getting a new name, and a new lease on life.”79 A year later, Eric Schmidt, now Google’s chairman, put the situation into perspective: “It is a big and very fundamental platform for Google.” He explained that Glass was withdrawn from public scrutiny only to “make it ready for users… these things take time.”80 As more information trickled out of the corporation, it became clear that there was no intention of ceding potential new supply routes in wearable technologies, no matter the public reaction. Glass was the harbinger of a new “wearables” platform that would help support the migration of behavioral surplus operations from the online to the offline world.81
In July 2017 the redirection phase went public with a blog post introducing a new iteration of Google Glass to the world, now as “Glass Enterprise Edition.”82 This time there would be no frontal attack on public space. Instead, it was to be a tactical retreat to the workplace—the gold standard of habituation contexts, where invasive technologies are normalized among captive populations of employees. “Workers in many fields, like manufacturing, logistics, field services, and healthcare find it useful to consult a wearable device for information and other resources while their hands are busy,” wrote the project’s leader, and most press accounts lauded the move, citing productivity and efficiency increases in factories that deployed the new Glass.83 There was little acknowledgment that habituation to Glass at work was most certainly a back door to Glass in our streets or that the intrusive surveillance properties of the device would, with equal certainty, be imposed on the women and men required to use them as a condition of their employment.
The lesson of Glass is that when one route to a supply source encounters obstacles, others are constructed to take up the slack and drive expansion. The corporation has begrudgingly learned to pay more attention to the public relations of these developments, but the unconditional demands of the extraction imperative mean that the dispossession cycle must proceed at full throttle, continuously claiming new territory.
Dispossession may be an act of “simple robbery” in theory, but in fact it is a complex, highly orchestrated political and material process that exhibits discernible stages and predictable dynamics. The theory of change exhibited here systematically transfers knowledge and rights from the many to the few in a glorious fog of Page’s “automagic.” It catalogues public contest as the unfortunate but predictable outcry of foolish populations who exhibit a knee-jerk “resistance to change,” wistfully clinging to an irretrievable past while denying an inevitable future: Google’s future, surveillance capitalism’s future. The theory indicates that opposition must simply be weathered as the signature of the first difficult phases of incursion. It assumes that opposition is fleeting, like the sharp yelp of pain when a Novocain needle first pierces the flesh, before numbness sets in.
V. Dispossession Competition
Google’s spectacular success in constructing the mechanisms and principles of surveillance capitalism and attracting surveillance revenues ignited competition in an escalating war of extraction. Google began in a blank space, but it would soon contend with other firms drawn to surveillance revenues. Facebook was the first and has remained the most aggressive competitor for behavioral surplus supplies, initiating a wave of incursions at high speed, establishing a presence on the free and lawless surplus frontier while denying its actions, repelling criticism, and thoroughly confusing the public. The “Like” button, introduced widely in April 2010 as a communications tool among friends, presented an early opportunity for Facebook’s Zuckerberg to master the dispossession cycle. By November of that year, a study of the incursion already underway was published by Dutch doctoral candidate and privacy researcher Arnold Roosendaal, who demonstrated that the button was a powerful supply mechanism from which behavioral surplus is continuously captured and transmitted, installing cookies in users’ computers whether or not they click the button. Presciently describing the operation as an “alternative business model,” Roosendaal discovered that the button also tracks non-Facebook members and concluded that Facebook was potentially able to connect with, and therefore surveil, “all web users.”84 Only two months earlier, Zuckerberg had characterized Facebook’s growing catalogue of privacy violations as “missteps.”85 Now he stuck to the script, eventually calling Roosendaal’s discovery a “bug.”86
By 2011 the habituation stage of the cycle was in full swing. A May Wall Street Journal report confirmed Facebook’s tracking, even when users don’t click the button, and noted that the button was already installed on one-third of the world’s one thousand most-visited websites. Meanwhile, Facebook’s Chief Technology Officer said of the button, “We don’t use them for tracking and they’re not intended for tracking.”87 On September 25, Australian hacker Nik Cubrilovic published findings showing that Facebook continued to track users even after they logged out of the site.88 Facebook announced that it would fix “the glitch,” explaining that certain cookies were tracking users in error, and noting that it could not cease the practice entirely due to “safety” and “performance” considerations.89 Journalists discovered that just three days before Cubrilovic’s revelations, the corporation received a patent on specialized techniques for tracking users across web domains. The new data methods enabled Facebook to track users, create personal profiles on individuals and their social networks, receive reports from third parties on each action of a Facebook user, and log those actions in the Facebook system in order to correlate them with specific ads served to specific individuals.90 The company immediately denied the relevance and importance of the patent.91
Facebook’s unflinching assertions that it did not track users, even in the face of robust facts to the contrary, left specialists increasingly frustrated and the public increasingly confused. This appears to have been the point. By denying every accusation and pledging its commitment to user well-being, Facebook secured a solid year and a half in which to habituate the world to its “Like” button, institutionalizing that iconic thumb turned toward the sky as an indispensable prosthetic of virtual communication.92
This solid achievement paved the way for the adaptation stage of the dispossession cycle, when in late November 2011, Facebook consented to a settlement with the FTC over charges that it had systematically “deceived consumers by telling them that they could keep their Facebook information private, and then repeatedly allowing it to be shared and made public.”93 The complaint brought by EPIC and a coalition of privacy advocates in 2009 initiated an FTC investigation that yielded plenty of evidence of the corporation’s broken promises.94 These included website changes that made private information public; third-party access to users’ personal data; leakage of personal data to third-party apps; a “verified apps” program in which nothing was verified; enabling advertisers to access personal information; allowing access to personal data after accounts were deleted; and violations of the Safe Harbor Framework, which governs data transfers between the United States and the EU. In the parallel universe of surveillance capitalism, each one of these violations was worthy of a five-star rating from the extraction imperative. The FTC order barred the company from making further privacy misrepresentations, required users’ affirmative consent to new privacy policies, and mandated a comprehensive privacy program to be audited every two years for twenty years. FTC Chairman Jon Leibowitz insisted that “Facebook’s innovation does not have to come at the expense of consumer privacy.”95 But Leibowitz was not up against a company; he was up against a new market form with distinct and intractable imperatives whose mandates can be fulfilled only at the expense of user privacy.
Redirection came swiftly. In 2012 the company announced it would target ads based on mobile app use, as it worked with Datalogix to determine when online ads result in a real-world purchase. This gambit required mining personal information, including e-mail addresses, from user accounts. That same year, Facebook also gave advertisers access to targeting data that included users’ e-mail addresses, phone numbers, and website visits, and it admitted that its system scans personal messages for links to third-party websites and automatically registers a “like” on the linked web page.96 By 2014, the corporation announced that it would be tracking users across the internet using, among its other digital widgets, the “Like” button, in order to build detailed profiles for personalized ad pitches. Its “comprehensive privacy program” advised users of this new tracking policy, reversing every assertion since April 2010 with a few lines inserted into a dense and lengthy terms-of-service agreement. No opt-out privacy option was offered.97 The truth was finally out: the bug was a feature.
Meanwhile, Google maintained the pledge that had been critical to the FTC’s approval of its 2007 acquisition of the ad-tracking behemoth DoubleClick when it agreed not to combine data from the tracking network with other personally identifiable information in the absence of a user’s opt-in consent. In this case, Google appears to have waited for Facebook to extend the surveillance capitalist frontier and bear the brunt of incursion and habituation. Later, in the summer of 2016, Google crossed that frontier with an announcement that a user’s DoubleClick browsing history “may be” combined with personally identifiable information from Gmail and other Google services. Its promised opt-in function for this new level of tracking was presented with the headline “Some new features for your Google account.” One privacy scholar characterized the move as the final blow to the last “tiny semblance” of privacy on the web. A coalition of privacy groups presented a new complaint to the FTC, implicitly recognizing the logic of the dispossession cycle: “Google has done incrementally and furtively what would plainly be illegal if done all at once.”98
Facebook’s IPO in 2012 was notoriously botched when last-minute downward revisions of its sales projections, precipitated by the rapid shift to mobile devices, led to some unsavory dealings among its investment bankers and their clients. But Zuckerberg, Sheryl Sandberg, and their team quickly mastered the nuances of the dispossession cycle, this time to steer the company toward mobile ads. They learned to be skilled and ruthless hunters of behavioral surplus, capturing supplies at scale, evading and resisting law, and upgrading the means of production to improve prediction products.
Surveillance revenues flowed fast and furiously, and the market lavishly rewarded the corporation’s shareholders. By 2017, the Financial Times hailed the company’s 71 percent earnings surge with the headline “Facebook: The Mark of Greatness” as Facebook’s market capitalization rose to just under $500 billion, with 2 billion average monthly active users. Facebook ranked seventh in one important tally of the top 100 companies in the first quarter of 2017, when just a year earlier it hadn’t figured anywhere in the top 100. Advertising, primarily mobile, accounted for nearly every dollar of the company’s revenue in the second quarter of 2017: $9.2 billion of a total $9.3 billion and a 47 percent increase over the prior year.99
The Guardian reported that Google and Facebook accounted for one-fifth of global ad spending in 2016, nearly double the figure of 2012, and by one accounting Google and Facebook owned almost 90 percent of the growth in advertising expenditures in 2016.100 Surveillance capitalism had propelled these corporations to a seemingly impregnable position.
Among the remaining three of the largest internet companies, Microsoft, Apple, and Amazon, it was Microsoft that first and most decisively turned toward surveillance capitalism as the means to restore its leadership in the tech sector, with the appointment of Satya Nadella to the role of CEO in February 2014. Microsoft had notoriously missed several key opportunities to compete with Google in the search business and develop its targeted advertising capabilities. As early as 2009, when Nadella was a senior vice president and manager of Microsoft’s search business, he publicly criticized the company’s failure to recognize the financial opportunities associated with that early phase of surveillance capitalism. “In retrospect,” he lamented, “it was a terrible decision” to end the search-ad service: “None of us saw the paid-search model in all its glory.” Nadella recognized then that Microsoft’s Bing search engine could not compete with Google because it lacked scale in behavioral surplus capture, the critical factor in the fabrication of high-quality prediction products: “When you look at search… it’s a game of scale. Clearly we don’t have sufficient scale today and that hinders… the quality of the ad relevance which is perhaps the bigger issue we have today.”101
Less than three months after assuming his new role, Nadella announced his intention to redirect the Microsoft ship straight into this game of scale with the April release of a study that the company had commissioned from market intelligence firm IDC.102 It concluded that “companies taking advantage of their data have the potential to raise an additional $1.6 trillion in revenue over companies that don’t,” and Nadella was determined to make landfall on the far shores of this rich new space. Microsoft would reap the advantages of its own data, and it would specialize in “empowering” its clients to do the same. Nadella published a blog post to signal the new direction, writing, “The opportunity we have in this new world is to find a way of catalyzing this data exhaust from ubiquitous computing and converting it into fuel for ambient intelligence.”103 As a video outlining the new “data vision” explains, “Data that was once untapped is now an asset.”
Many of Nadella’s initiatives aim to make up for lost time in establishing robust supply routes to surplus behavior and upgrading the company’s means of production. Bing’s search engineering team built its own model of the digital and physical world with a technology it calls Satori: a self-learning system that is adding 28,000 DVDs’ worth of content every day.104 According to the project’s senior director, “It’s mind-blowing how much data we have captured over the last couple of years. The line would extend to Venus and you would still have 7 trillion pixels left over.”105 All those pixels were being put to good use. In its October 2015 earnings call, the company announced that Bing had become profitable for the first time, thanks to around $1 billion in search ad revenue from the previous quarter.
Another strategy to enhance Bing’s access to behavioral surplus was the corporation’s “digital assistant,” Cortana, to which users addressed more than one billion questions in the three months after its 2015 launch.106 As one Microsoft executive explains, “Four out of five queries go to Google in the browser. In the [Windows 10] task bar [where Cortana is accessed], five out of five queries go to Bing.… We’re all in on search. Search is a key component to our monetization strategy.”107
Cortana generates more than search traffic. As Microsoft’s privacy policy explains, “Cortana works best when you sign in and let her use data from your device, your personal Microsoft account, other Microsoft services, and third-party services you choose to connect.”108 Like Page’s automagic, Cortana is intended to inspire awestruck and grateful surrender. One Microsoft executive characterizes Cortana’s message: “‘I know so much about you. I can help you in ways you don’t quite expect. I can see patterns that you can’t see.’ That’s the magic.”109
Nevertheless, the company made a canny decision not to disclose the true extent of Cortana’s knowledge to its users. It wants to know everything about you, but it does not want you to know how much it knows or that its operations are entirely geared to continuously learning more. Instead, the “bot” is programmed to ask for permission and confirmation. The idea is to avoid spooking the public by presenting Cortana’s intelligence as “progressive” rather than “autonomous,” according to the project’s group program manager, who noted that people do not want to be surprised by how much their phones are starting to take over: “We made an explicit decision to be a little less ‘magical’ and a little more transparent.”110