The Age of Surveillance Capitalism


by Shoshana Zuboff


  It was probably no coincidence that the leaked Facebook presentation appeared around the same time that a young Cambridge Analytica mastermind-turned-whistleblower, Chris Wylie, unleashed a torrent of information on that company’s secret efforts to predict and influence individual voting behavior, quickly riveting the world on the small political analytics firm and the giant source of its data: Facebook. There are many unanswered questions about the legality of Cambridge Analytica’s complex subterfuge, its actual political impact, and its relationship with Facebook. Our interest here is restricted to how its machinations shine a bright light on the power of surveillance capitalism’s mechanisms, especially the determination to render data from the depth dimension.

  Kosinski and Stillwell had called attention to the commercial value of their methods, understanding that surplus from the depths afforded new possibilities for behavioral manipulation and modification. Wylie recounts his fascination with this prospect, and, through a complicated chain of events, it was he who persuaded Cambridge Analytica to use Kosinski and Stillwell’s data to advance its owner’s political aims. The objective was “behavioral micro-targeting… influencing voters based not on their demographics but on their personalities.…”79 When negotiations with Kosinski and Stillwell broke down, a third Cambridge academic, Alexander Kogan, was hired to render a similar cache of Facebook personality data.

  Kogan was well-known to Facebook. He had collaborated with its data scientists on a 2013 project in which the company provided data on 57 billion “friendships.” This time, he paid approximately 270,000 people to take a personality quiz. Unknown to these participants, Kogan’s app enabled him to access their Facebook profiles and, on average, the profiles of about 160 of each of the test takers’ friends, “none of whom would have known or had reason to suspect” this invasion.80 It was a massive rendition operation from which Kogan successfully produced psychological profiles of somewhere between 50 and 87 million Facebook users, data that he then sold to Cambridge Analytica.81 When Facebook questioned him about his application, Kogan vowed that his research was solely for academic purposes. Indeed, mutual respect between the two parties was sufficiently robust that Facebook hired one of Kogan’s assistants to join its in-house team of research psychologists.82

  “We exploited Facebook to harvest millions of people’s profiles,” Wylie admitted, “and built models to exploit what we knew about them and target their inner demons.” His summary of Cambridge Analytica’s accomplishments is a précis of the surveillance capitalist project and a rationale for its determination to render from the depths. These are the very capabilities that have gathered force over the nearly two decades of surveillance capitalism’s incubation in lawless space. These practices produced outrage around the world, when in fact they are routine elements in the daily elaboration of surveillance capitalism’s methods and goals, both at Facebook and within other surveillance capitalist companies. Cambridge Analytica merely reoriented the surveillance capitalist machinery from commercial markets in behavioral futures toward guaranteed outcomes in the political sphere. It was Eric Schmidt, not Wylie, who first pried open this Pandora’s box, paving the way for the transfer of surveillance capitalism’s core mechanisms to the electoral process as he cemented the mutual affinities that produced surveillance exceptionalism. In fact, Wylie enjoyed his early training under Obama’s “director of targeting.”83 Schmidt’s now-weaponized innovations have become the envy of every political campaign and, more dangerously, of every enemy of democracy.84

  In addition to employing surveillance capitalism’s foundational mechanisms—rendition, behavioral surplus, machine intelligence, prediction products, economies of scale, scope, and action—Cambridge Analytica’s dark adventure also exemplifies surveillance capitalism’s tactical requirements. Its operations were designed to produce ignorance through secrecy and the careful evasion of individual awareness. Wylie calls this “information warfare,” correctly acknowledging the asymmetries of knowledge and power that are essential to the means of behavioral modification:

  I think it’s worse than bullying, because people don’t necessarily know it’s being done to them. At least bullying respects the agency of people because they know… if you do not respect the agency of people, anything that you’re doing after that point is not conducive to a democracy. And fundamentally, information warfare is not conducive to democracy.85

  This “warfare” and its structure of invasion and conquest represent surveillance capitalism’s standard operating procedures to which billions of innocents are subjected each day, as rendition operations violate all boundaries and modification operations claim dominion over all people. Surveillance capitalism imposes this quid pro quo of “agency” as the price of information and connection, continuously pushing the envelope of rendition to new frontiers. Along the way, companies such as Facebook and Google employ every useful foot soldier, including social scientists such as Kogan who are willing to put their shoulders to the wheel as they help the company learn, perfect, and integrate the cutting-edge methods that can conquer the next frontier, a phenomenon that we will visit in more depth in Chapter 10.

  Irrespective of Cambridge Analytica’s actual competence and its ultimate political impact, the plotting and planning behind its ambitions are testament to the pivotal role of rendition from the depths in the prediction and modification of behavior, always in pursuit of certainty. Billionaires such as Zuckerberg and Mercer have discovered that they can muscle their way to dominance of the division of learning in society by setting their sights on these rendition operations and the fortunes they tell. They aim to be unchallenged in their power to know, to decide who knows, and to decide who decides. The rendition of “personality” was an important milestone in this quest: a frontier, yes, but not the final frontier.

  III. Machine Emotion

  In 2015 an eight-year-old startup named Realeyes won a 3.6 million euro grant from the European Commission for a project code-named “SEWA: Automatic Sentiment Analysis in the Wild.” The aim was “to develop automated technology that will be able to read a person’s emotion when they view content and then establish how this relates to how much they liked the content.” The director of video at AOL International called the project “a huge leap forward in video ad tech” and “the Holy Grail of video marketing.”86 Just a year later, Realeyes won the commission’s Horizon 2020 innovation prize thanks to its “machine learning-based tools that help market researchers analyze the impact of their advertising and make it more relevant.”87

  The SEWA project is a window on a burgeoning new domain of rendition and behavioral surplus supply operations known as “affective computing,” “emotion analytics,” and “sentiment analysis.” The personalization project descends deeper toward the ocean floor with these new tools, where they lay claim to yet another frontier of rendition, trained not only on your personality but also on your emotional life. If this project of surplus from the depths is to succeed, then your unconscious—where feelings form before there are words to express them—must be recast as simply one more source of raw-material supply for machine rendition and analysis, all of it for the sake of more-perfect prediction. As a market research report on affective computing explains, “Knowing the real-time emotional state can help businesses to sell their product and thereby increase revenue.”88

  Emotion analytics products such as SEWA use specialized software to scour faces, voices, gestures, bodies, and brains, all of it captured by “biometric” and “depth” sensors, often in combination with imperceptibly small, “unobtrusive” cameras. This complex of machine intelligence is trained to isolate, capture, and render the most subtle and intimate behaviors, from an inadvertent blink to a jaw that slackens in surprise for a fraction of a second. Combinations of sensors and software can recognize and identify faces; estimate age, ethnicity, and gender; analyze gaze direction and blinks; and track distinct facial points to interpret “micro-expressions,” eye movements, emotions, moods, stress, deceit, boredom, confusion, intentions, and more: all at the speed of life.89 As the SEWA project description says,

  Technologies that can robustly and accurately analyse human facial, vocal and verbal behaviour and interactions in the wild, as observed by omnipresent webcams in digital devices, would have profound impact on both basic sciences and the industrial sector. They… measure behaviour indicators that heretofore resisted measurement because they were too subtle or fleeting to be measured by the human eye and ear.… 90

  These behaviors also elude the conscious mind. The machines capture the nanosecond of disgust that precedes a rapid-fire sequence of anger, comprehension, and finally joy on the face of a young woman watching a few frames of film, when all she can think to say is “I liked it!” A Realeyes white paper explains that its webcams record people watching videos in their homes “so we can capture genuine reactions.” Algorithms process facial expressions, and “emotions are detected, aggregated, and reported online in real time, second by second… enabling our clients to make better business decisions.” Realeyes emphasizes its own “proprietary metrics” to help marketers “target audiences” and “predict performance.”91

  Once again, a key theme of machine intelligence is that quality is a function of quantity. Realeyes says that its data sets contain over 5.5 million individually annotated frames of more than 7,000 subjects from all over the world: “We are continuously working to build the world’s largest expression and behaviour datasets by increasing the quality and volume of our already-existing categories, and by creating new sets—for other expressions, emotions, different behavioral clues or different intensities.… Having automated this process, it can then be scaled up to simultaneously track the emotions of entire audiences.”92 Clients are advised to “play your audience emotions to stay on top of the game.”93 The company’s website offers a brief review of the history of research on human emotions, concluding that “the more people feel, the more they spend.… Intangible ‘emotions’ translate into concrete social activity, brand awareness, and profit.”94

  The chair of SEWA’s Industry Advisory Board is frank about this undertaking, observing that unlocking the meaning of “the non-spoken language of the whole body and interpreting complex emotional response… will be wonderful for interpreting reactions to marketing materials,” adding that “it is simply foolish not to take emotional response into account when evaluating all marketing materials.” Indeed, these “nonconscious tools” extract rarified new qualities of behavioral surplus from your inner life in order to predict what you will buy and the precise moment at which you are most vulnerable to a push. SEWA’s advisory chair says that emotional analytics are “like identifying individual musical notes.” Each potential customer, then, is a brief and knowable composition: “We will be able to identify chords of human response such as ‘liking,’ boredom, etc.… We will ultimately become masters of reading each other’s feelings and intent.”95

  This is not the first time that the unconscious mind has been targeted as an instrument of others’ aims. Propaganda and advertising have always been designed to appeal to unacknowledged fears and yearnings. These have relied more on art than science, using gross data or professional intuition for the purpose of mass communication.96 Those operations cannot be compared to the scientific application of today’s historic computational power to the micro-measured, continuous rendition of your more-or-less actual feelings. The new toolmakers do not intend to rob you of your inner life, only to surveil and exploit it. All they ask is to know more about you than you know about yourself.

  Although the treasures of the unconscious mind have been construed differently across the millennia—from spirit to soul to self—the ancient priest and the modern psychotherapist are united in an age-old reverence for its primal healing power through self-discovery, self-mastery, integration, restoration, and transcendence. In contrast, the conception of emotions as observable behavioral data first took root in the mid-1960s with the work of Paul Ekman, then a young professor at the University of California, San Francisco. From his earliest papers, Ekman argued that “actions speak louder than words.”97 Even when a person is determined to censor or control his or her emotional communications, Ekman postulated that some types of nonverbal behaviors “escape control and provide leakage.”98 Early on, he recognized the potential utility of a “categorical scheme” that reliably traced the effects of expression back to their causes in emotion,99 and in 1978 Ekman, along with frequent collaborator Wallace Friesen, published the seminal Facial Action Coding System (FACS), which provided that scheme.

  FACS distinguishes the elemental movements of facial muscles, breaking them down into twenty-seven facial “action units,” along with more for the head, eyes, tongue, and so on. Later, Ekman concluded that six “basic emotions” (anger, fear, sadness, enjoyment, disgust, and surprise) anchored the wider array of human emotional expression.100 FACS and the six-emotion model became the dominant paradigm for the study of facial expression and emotion, in much the same way that the five-factor model came to dominate studies of personality.

  The program of emotional rendition began innocently enough with MIT Media Lab professor Rosalind Picard and the new field of computer science that she called “affective computing.” She was among the first to recognize the opportunity for a computational system to automate the analysis of Ekman’s facial configurations and correlate micro-expressions with their emotional causality.101 She aimed to combine facial expression with the computation of vocal intonation and other physiological signals of emotion that could be measured as behavior. In 1997 she published Affective Computing, which posited a practical solution to the idea that some emotions are available to the conscious mind and can be expressed “cognitively” (“I feel scared”), whereas others may elude consciousness but nevertheless be expressed physically in beads of sweat, a widening of the eyes, or a nearly imperceptible tightening of the jaw.

  The key to affective computing, Picard argued, was to render both conscious and unconscious emotion as observable behavior for coding and calculation. A computer, she reasoned, would be able to render your emotions as behavioral information. Affect recognition, as she put it, is “a pattern recognition problem,” and “affect expression” is pattern synthesis. The proposition was that “computers can be given the ability to recognize emotions as well as a third-person human observer.”

  Picard imagined her affective insights being put to use in ways that were often good-hearted or, at the very least, benign. Most of the applications she describes conform to the logic of the Aware Home: any knowledge produced would belong to the subject to enhance his or her reflexive learning. For example, she envisioned a “computer-interviewing agent” that could function as an “affective mirror” coaching a student in preparation for a job interview or a date, and an automatic agent that could alert you to hostile tones in your own prose before you press “send.” Picard anticipated other tools combining software and sensors that she believed could enhance daily life in a range of situations, such as helping autistic children develop emotional skills, providing software designers with feedback on users’ frustration levels, assigning points to video game players to reward courage or stress reduction, producing learning modules that stimulate curiosity and minimize anxiety, and analyzing emotional dynamics in a classroom. She imagined software agents that learn your preferences and find you the kinds of news stories, clothing, art, or music that make you smile.102 Whatever one’s reaction to these ideas might be, they share one key pattern: unlike SEWA’s model, Picard’s data were intended to be for you, not merely about you.

  Back in 1997, Picard acknowledged the need for privacy “so that you remain in control over who gets access to this information.” Importantly for our analysis, in the final pages of her book she expressed some concerns, writing that “there are good reasons not to broadcast your affective patterns to the world… you might flaunt your good mood in front of friends… you probably do not want it picked up by an army of sales people who are eager to exploit mood-based buying habits, or by advertisers eager to convince you that you’d feel better if you tried their new soft drink right now.” She noted the possibility of intrusive workplace monitoring, and she voiced reservations about the possibility of a dystopian future in which “malevolent” governmental forces use affective computing to manipulate and control the emotions of populations.103

  Despite these few paragraphs of apprehension, her conclusions were bland. Every technology arrives with its “pros and cons,” she wrote. The concerns are not “insurmountable” because “safeguards can be developed.” Picard was confident that technologies and techniques could solve any problem, and she imagined “wearable computers” that “gather information strictly for your own use.…” She stressed the importance of ensuring “that the wearer retains ultimate control over the devices he chooses to wear, so that they are tools of helpful empowerment and not of harmful subjugation.”104

  In a pattern that is by now all too familiar, safeguards lagged as surveillance capitalism flourished. By early 2014, Facebook had already applied for an “emotion detection” patent designed to implement each of Picard’s fears.105 The idea was “one or more software modules capable of detecting emotions, expressions or other characteristic of a user from image information.” As always, the company was ambitious. Its list of detectable emotions “by way of example and not limitation” included expressions such as “a smile, joy, humor, amazement, excitement, surprise, a frown, sadness, disappointment, confusion, jealousy, indifference, boredom, anger, depression, or pain.” The hope was that “over time” their module would be able to assess “a user’s interest in displayed content” for the purposes of “customization based on emotion type.”106

 
