It is eloquent testimony to the health care system’s failure to serve the needs of second-modernity individuals that we now access health data and advice from our phones while these pocket computers aggressively access us. M-health has triggered an explosion of rendition and behavioral surplus capture as individuals turn in record numbers to their fitness bands and diet apps for support and guidance.47 By 2016, there were more than 100,000 mobile health apps available on the Google Android and Apple iOS platforms, double the number in 2014.48 These rich data can no longer be imagined as cloistered within the intimate closed loops between a patient and her doctor or between an application and its dieters or runners. That bucolic vision has its holdouts, to be sure, but for surveillance capitalists this vision is but a faded daguerreotype.
In the US, most health and fitness applications are not subject to health privacy laws, and the laws that do exist do not adequately take into account either new digital capabilities or the ferocity of surveillance capitalist operations. Companies are expected to self-regulate by following guidelines suggested by the Federal Trade Commission (FTC) and other government agencies. For example, in 2016 the FTC issued a list of best practices for developers of mobile health applications aimed at increasing transparency, privacy, and security. Among these suggestions, developers are encouraged to “make sure your app doesn’t access consumer information it doesn’t need,” “let consumers select particular contacts, rather than having your app request access to all user contacts through the standard API,” and let users “choose privacy-protective default settings.” That year the Food and Drug Administration announced that it would also not seek to regulate health and fitness apps, citing their “low-level risk.” Instead, the agency released its own set of voluntary guidelines for software developers.49
The agencies’ well-meaning guidelines overlook the inconvenient truth that transparency and privacy represent friction for surveillance capitalists in much the same way that improving working conditions, rejecting child labor, or shortening the working day represented friction for the early industrial capitalists. Back then it took targeted laws, not suggestions, to change working conditions. Then as now, the problems to which these pleas for self-restraint are addressed cannot be understood as excesses, mistakes, oversights, or lapses of judgment. They are necessitated by the reigning logic of accumulation and its relentless economic imperatives.
A legal review of mobile health apps concludes that most of them “take the consumers’ private information and data without the consumers’ permission and… do not generally disclose to the user that this information will be sent to advertising companies.” These conclusions are borne out by a long queue of studies,50 but let’s focus on a 2016 in-depth investigation by scholars from the Munk School of Global Affairs at the University of Toronto in association with Open Effect, a nonprofit focused on digital privacy and security. This study looked at the collection, processing, and usage activities associated with nine fitness trackers.51 Seven were chosen for their popularity, one was made by a Canadian company, and the ninth was an app that specialized in women’s health. All but two apps transmitted every logged fitness event to the company’s servers, which enabled backup and sharing with one’s friends but also “data analytics” and distribution to third parties. Some of the trackers transmitted device identification numbers; others passively and continuously transmitted the user’s precise longitude and latitude coordinates. These identifiers “could link fitness and biographical data to a single mobile phone hardware, or single specific fitness wearable.…” None of this sensitive information was necessary for the tracker to operate effectively, and most of the privacy policies were opaque at best and allowed data to be “sold or exchanged with third parties.” As we know, once a third party captures your surplus, it is shared with other third parties, who share with other third parties, and so on.
The team also examined the trackers’ transmission of the Bluetooth Media Access Control (MAC) address that is unique to each phone. When this address is publicly discoverable, any third party with an interest in your movements—retailers who want to know your mall activity, insurers concerned about your compliance with an exercise regime—can “persistently” track your phone. Multiple data sets logged over time can be combined to form a fine-grained picture of your movements, enabling targeted applications and heightening the probability of guaranteed outcomes. The only real protection is a device that randomly but regularly generates a new MAC address for its transmissions, but of the nine trackers, only Apple’s performed this operation.
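The mechanics of this protection are simple enough to sketch. A randomized MAC address is just six fresh bytes with two flag bits set by convention; the following minimal Python sketch illustrates the general technique (it is not any vendor’s actual implementation) and shows why regular rotation defeats persistent tracking.

```python
import secrets

def random_mac() -> str:
    """Generate a random 'locally administered' MAC address.

    Setting bit 0x02 of the first octet marks the address as locally
    administered rather than manufacturer-assigned, and clearing bit
    0x01 keeps it unicast -- the convention randomization schemes follow.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

# Each rotation produces an address with no link to the previous one,
# so logs collected at different times and places cannot be joined.
print(random_mac())
```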
The report also identifies a general pattern of careless security as well as the ability to generate false data. The researchers observed that consumers are likely to be misled and confused, overestimating the extent of security measures and underestimating “the breadth of personal data collected by fitness tracking companies.” As they concluded, “We discovered severe security vulnerabilities, incredibly sensitive geolocation transmissions that serve no apparent benefit to the end user, and… policies leaving the door open for the sale of users’ fitness data to third parties without express consent of the users.”
If you are inclined to dismiss this report because fitness trackers can be written off as toys, consider an incisive 2016 investigation of Android-based diabetes apps published in the Journal of the American Medical Association and, with it, ample illustration of the frenzy of body rendition. The researchers note that although the FDA approved the prescription of a range of apps that transmit sensitive health data, the behind-the-scenes practices of these apps are “understudied.” They examined 211 diabetes apps and randomly sampled 65 of them for close analysis of data-transmission practices.52
Among these apps, merely downloading the software automatically “authorized collection and modification of sensitive information.” The researchers identified a great deal of backstage action, including apps that modify or delete your information (64 percent), read your phone status and identity (31 percent), gather location data (27 percent), view your Wi-Fi connections (12 percent), and activate your camera in order to access your photos and videos (11 percent). Between 4 percent and 6 percent of the apps went even further: reading your contact lists, calling phone numbers found in your device, modifying your contacts, reading your call log, and activating your microphone to record your speech.
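To see how researchers surface this backstage action, consider a minimal Python sketch of the kind of permission audit involved. It assumes a decompiled, plain-text AndroidManifest.xml (compiled manifests are binary and must first be decoded with a tool such as apktool) and an illustrative subset of sensitive permissions keyed to the behaviors above; it is not the study’s actual instrument.

```python
# Audit which sensitive permissions an app requests from its manifest.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset keyed to the behaviors reported in the study.
SENSITIVE = {
    "android.permission.READ_PHONE_STATE",      # phone status and identity
    "android.permission.ACCESS_FINE_LOCATION",  # location data
    "android.permission.ACCESS_WIFI_STATE",     # view Wi-Fi connections
    "android.permission.CAMERA",                # camera, photos, and video
    "android.permission.READ_CONTACTS",         # contact lists
    "android.permission.RECORD_AUDIO",          # microphone
}

def requested_permissions(manifest_path: str) -> set[str]:
    """Return the permission strings declared in an app's manifest."""
    root = ET.parse(manifest_path).getroot()
    return {
        elem.get(f"{ANDROID_NS}name", "")
        for elem in root.iter("uses-permission")
    }

perms = requested_permissions("AndroidManifest.xml")
print(sorted(perms & SENSITIVE))  # the sensitive subset this app can touch
```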
Finally, the research team unearthed an even darker secret: privacy policies do not matter. Of the 211 apps in the group, 81 percent did not have privacy policies, but for those that did, “not all of the provisions actually protected privacy.” Of those apps without privacy policies, 76 percent shared sensitive information with third parties, and of those with privacy policies, 79 percent shared data while only about half admitted doing so in their published disclosures. In other words, privacy policies are more aptly referred to as surveillance policies, and that is what I suggest we call them.
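A back-of-envelope calculation from the reported figures, sketched below, makes the point starkly: the presence of a privacy policy barely changed an app’s odds of sharing data with third parties.

```python
# The study's reported figures: 81% of the 211 apps had no privacy
# policy; 76% of those shared data with third parties, as did 79% of
# the apps that did have a policy.
no_policy, with_policy = 0.81, 0.19
shared_if_none, shared_if_policy = 0.76, 0.79

overall = no_policy * shared_if_none + with_policy * shared_if_policy
print(f"shared data, no policy:   {shared_if_none:.0%}")
print(f"shared data, with policy: {shared_if_policy:.0%}")
print(f"expected overall rate:    ~{overall:.0%}")  # roughly 77% either way
```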
There are many new territories of body rendition: organs, blood, eyes, brain waves, faces, gait, posture. Each of these expresses the same patterns and purpose that we have seen here. The surveillance capitalists relentlessly fight any attempts to constrain rendition. The ferocity with which they claim their “right to rendition” out of thin air is ample evidence of its foundational importance in the pursuit of surveillance revenues.
This ferocity is well illustrated in surveillance capitalists’ determination to discourage, eliminate, or weaken any laws governing the rendition of biometric information, especially facial recognition. Because there is no federal law in the US that regulates facial recognition, these battles occur at the state level. Currently, the Illinois Biometric Information Privacy Act offers the most comprehensive legal protections, requiring companies to obtain written consent before collecting biometric information from any individual and, among other stipulations, granting individuals the right to sue a company for unauthorized rendition.53
The Center for Public Integrity, along with journalists, privacy advocates, and legal scholars, has documented the active opposition of surveillance capitalists to the Illinois law and similar legislative proposals in other states. With its unique competitive advantages in facial recognition, Facebook is considered the most uncompromising of all the tech companies when it comes to biometric data, described as “working feverishly to prevent other states from enacting a law like the one in Illinois.”54
Facebook’s considerable political muscle had been cultivated in just a few years as it learned to emulate Google’s playbook of political and cultural fortifications. The company’s founder, Mark Zuckerberg, demonstrated an iron determination to preserve his freedom in lawless space, pushing the boundaries of existing regulations and vigorously opposing even the whisper of new law. Between 2009 and 2017, the company increased its lobbying spend by a factor of fifty, building “a massive lobbying entourage of Washington power brokers.” Facebook’s $4.6 million in donations during the 2016 election cycle complemented its lobbying budget of $11.5 million in 2017.55
Zuckerberg’s advantages in biometrics are significant. In 2017 Facebook boasted two billion monthly users uploading 350 million photos every day, a supply operation that the corporation’s own researchers refer to as “practically infinite.”56 As early as 2014, a Facebook research team announced that it had “closed the gap” and was able to recognize faces “in the wild” with 97.35 percent accuracy, “closely approaching human-level performance.” The report highlights the corporation’s supply and manufacturing advantages, especially the use of “deep learning” based on “large training sets.”57 Facebook announced its eagerness to use facial recognition as a means to more powerful ad targeting, but even more of the uplift would come from the immense machine training opportunities represented by so many photos. By 2018, its machines were learning to discern activities, interests, mood, gaze, clothing, gait, hair, body type, and posture.58 The marketing possibilities are infinite.
It should not surprise any student of the prediction imperative that with these advantages in hand, Facebook is unwilling to accept anything less than total conquest in its bid to render faces for the sake of more-lucrative prediction products. So far, Facebook and its brethren have been successful, turning back legislative proposals in Montana, New Hampshire, Connecticut, and Alaska, and fatally weakening a bill that was passed in Washington state. Among the tech companies, only Facebook continued to oppose even the diminished terms of the Washington legislation.59
If rendition is interrupted, surveillance capitalism cannot stand, for the whole enterprise rests on this original sin. This fact is amply displayed in the public drama surrounding the ill-fated 2015 attempt to produce public guidelines on the creation and use of biometric information through a voluntary “privacy multi-stakeholder” process convened by the National Telecommunications and Information Administration (NTIA) under the auspices of the US Department of Commerce. After weeks of negotiations, consumer advocates walked out in protest over the hard-line position of the tech companies and their lobbyists on the single most pivotal issue: consent.
The companies insisted on their right to use facial-recognition systems to identify a “stranger on the street” without first obtaining the individual’s consent. As one lobbyist in the talks told the press, “Everyone has the right to take photographs in public… if someone wants to apply facial recognition, should they really need to get consent in advance?” Privacy scholars were quick to respond that there is no lawfully established right to such actions, let alone a First Amendment right.60 Nobody reckoned with the fact that the prediction imperative makes individual ignorance the preferred condition for rendition operations, just as Arendt had observed and Mackay had prescribed for animals in the wild. Original sin prefers the dark.
The talks continued without the advocates, and in 2016 the NTIA issued its “Privacy Best Practice Recommendations for Commercial Facial Recognition Use.” The guidelines should be understood as the “best” for the surveillance capitalists but as the “worst” for everyone else. In the language of these guidelines, the tech companies, retailers, and others determined to chase surveillance revenues are simply “encouraged” to make their policies on facial recognition “available to consumers, in a reasonable manner.…” Where companies impose facial recognition on a physical location, they are “encouraged” to provide “notice” to consumers.61 Rendition operations are tacitly accorded legitimacy, not only in the lack of contest, but because they stand as the immovable facts draped in the cheap garlands of toothless “best practices.” Georgetown University legal scholar Alvaro Bedoya, a member of the advocacy group that quit the deliberations, blasted the recommendations as “a mockery of the Fair Information Practice Principles on which they claim to be grounded”; they offer “no real protection for individuals” and “cannot be taken seriously.”62
Under the regime of surveillance capitalism, individuals do not render their experience out of choice or obligation but rather out of ignorance and the dictatorship of no alternatives. The ubiquitous apparatus operates through coercion and stealth. Our advance into life necessarily takes us through the digital, where involuntary rendition has become an inescapable fact. We are left with few rights to know, or to decide who knows, or to decide who decides. This abnormal division of learning is created and sustained by secret fiat, implemented by invisible methods, and directed by companies bent to the economic imperatives of a strange new market form. Surveillance capitalists impose their will backstage, while the actors perform the stylized lullabies of disclosure and agreement for the public.
The prediction imperative transforms the things that we have into things that have us in order that it might render the range and richness of our world, our homes, and our bodies as behaving objects for its calculations and fabrications on the path to profit. The chronicles of rendition do not end here, however. Act II requires a journey from our living rooms and streets to another world below the surface, where inner life unfolds.
CHAPTER NINE
RENDITION FROM THE DEPTHS
I couldn’t feel, so I tried to touch…
—LEONARD COHEN
“HALLELUJAH”
I. Personalization as Conquest
Microsoft CEO Satya Nadella introduced Cortana, the corporation’s “personal digital assistant,” at the firm’s annual Ignite conference in 2016:
This new category of the personal digital assistant is a runtime, a new interface. It can take text input. It can take speech input. It knows you deeply. It knows your context, your family, your work. It knows the world. It is unbounded. In other words, it’s about you; it’s not about any one device. It goes wherever you go. It’s available on any phone—iOS, Android, Windows—doesn’t matter. It is available across all of the applications that you will use in your life.1
This is a new frontier of behavioral surplus where the dark data continent of your inner life—your intentions and motives, meanings and needs, preferences and desires, moods and emotions, personality and disposition, truth telling or deceit—is summoned into the light for others’ profit. The point is not to cure but to render all of it as immeasurably tiny bits of behavior available for calculation so that each can take its place on the assembly line that moves from raw materials to product development, manufacturing, and sales.
The machine invasion of human depth is prosecuted under the banner of “personalization,” a slogan that betrays the zest and cynicism brought to the grimy challenge of exploiting second-modernity needs and insecurities for outsize gain. From the point of view of the prediction imperative, personalization is a means of “individualizing” supply operations in order to secure a continuous flow of behavioral surplus from the depths. This process can be accomplished successfully only in the presence of our unrelenting hunger for recognition, appreciation, and most of all, support.
Recall that Hal Varian, Google’s chief economist, helped chart this course. “Personalization and customization” are the third “new use” of computer-mediated transactions. Instead of having to ask Google questions, it should “know what you want and tell you before you ask the question.” Google Now, the corporation’s first digital assistant, was charged with this task. Varian warned that people would have to give Google even more of themselves in order to reap the value of the application: “Google Now has to know a lot about you and your environment to provide these services. This worries some people.” He rationalizes any concern, arguing that rendering personal information to Google is no different from sharing intimacies with doctors, lawyers, and accountants. “Why am I willing to share all this private information?” he asks. “Because I get something in return.… These digital assistants will be so useful that everyone will want one.” Varian is confident that the needs of second-modernity individuals will subvert any resistance to the rendition of personal experience as the quid pro quo for the promise of a less stressful and more effective life.2