The Age of Surveillance Capitalism


by Shoshana Zuboff


  By 2017, exactly twenty years after the publication of Picard’s book, a leading market research firm forecast that the “affective computing market,” including software that recognizes speech, gesture, and facial expressions along with sensors, cameras, storage devices, and processors, would grow from $9.35 billion in 2015 to $53.98 billion in 2021, predicting a compound annual growth rate of nearly 35 percent. What happened to cause this explosion? The report concludes that heading up the list of “triggers” for this dramatic growth is “rising demand for mapping human emotions especially by the marketing and advertising sector.…”107 Picard’s good intentions were like so many innocent iron filings in the presence of a magnet as the market demand exerted by the prediction imperative drew affective computing into surveillance capitalism’s powerful force field.

  Picard would eventually become part of this new dispossession industry with a company called Affectiva that was cofounded with Rana el Kaliouby, an MIT Media Lab postdoctoral research scientist and Picard protégé. The company’s transformation from doing good to doing surveillance capitalism is a metaphor for the fate of the larger undertaking of emotion analysis as it is rapidly drawn into the competitive maelstrom for surveillance revenues.

  Picard and Kaliouby shared a vision of applying their research in medical and therapeutic settings. The challenges of autistic children seemed a perfect fit for their discoveries, so they trained a machine system called MindReader to recognize emotions using paid actors to mimic specific emotional responses and facial gestures. Early on, MIT Media Lab corporate sponsors Pepsi, Microsoft, Bank of America, Nokia, Toyota, Procter & Gamble, Gillette, Unilever, and others had overwhelmed the pair with queries about using their system to measure customers’ emotional responses. Kaliouby describes the women’s hesitation and their determination to focus on “do-good” applications. According to her account, the Media Lab encouraged the two women to “spin off” their work into the startup they called Affectiva, imagined as a “baby IBM for emotionally intelligent machines.”108

  It wasn’t long before the new company found itself fielding significant interest from ad agencies and marketing firms itching for automated rendition and analysis from the depths. Describing that time, Picard told one journalist, “Our CEO was absolutely not comfortable with the medical space.” As a result, Picard was “pushed out” of the firm three years after its founding. As an Affectiva researcher recounted, “We began with a powerful set of products that could assist people who have a very difficult time with perceiving affect.… Then they started to emphasize only the face, to focus on advertisements, and on predicting whether someone likes a product, and just went totally off the original mission.”109

  Companies such as market research firm Millward Brown and advertising powerhouse McCann Erickson, competing in a new world of targeted “personalized” ads, already craved access to the inarticulate depths of the consumer response. Millward Brown had even formed a neuroscience unit but found it impossible to scale. It was Affectiva’s analysis of one particularly nuanced ad for Millward Brown that dazzled its executives and decisively turned the tide for the startup. “The software was telling us something we were potentially not seeing,” one Millward Brown executive said. “People often can’t articulate such detail in sixty seconds.”110

  By 2016, Kaliouby was the company’s CEO, redefining its business as “Emotion AI” and calling it “the next frontier of artificial intelligence.”111 The company had raised $34 million in venture capital, included 32 Fortune 100 companies and 1,400 brands from all over the world among its clients, and claimed to have the largest repository of emotion data in the world, with 4.8 million face videos from 75 countries, even as it continued to expand its supply routes with data sourced from online viewing, video game participation, driving, and conversation.112

  This is the commercial context in which Kaliouby came to feel that it is perfectly reasonable to assert that an “emotion chip” will become the base operational unit of a new “emotion economy.” She speaks to her audiences of a chip embedded in all things everywhere, running constantly in the background, producing an “emotion pulse” each time you check your phone: “I think in the future we’ll assume that every device just knows how to read your emotions.”113 At least one company, Emoshape, has taken her proposition seriously. The firm, whose tagline is “Life Is the Value,” produces a microchip that it calls “the industry’s first emotion synthesis engine,” delivering “high performance machine emotion awareness.” The company writes that its chip can classify twelve emotions with up to 98 percent accuracy, enabling its “artificial intelligence or robot to experience 64 trillion possible distinct emotional states.”114

  Kaliouby imagines that pervasive “emotion scanning” will come to be as taken for granted as a “cookie” planted in your computer to track your online browsing. After all, those cookies once stirred outrage, and now they inundate every online move. For example, she anticipates YouTube scanning its viewers’ emotions as they watch videos. Her confidence is buoyed by demand that originates in the prediction imperative: “The way I see it, it doesn’t matter that your Fitbit doesn’t have a camera, because your phone does, and your laptop does, and your TV will. All that data gets fused with biometrics from your wearable devices and builds an emotional profile for you.” As a start, Affectiva pioneered the notion of “emotion as a service,” offering its analytics on demand: “Just record people expressing emotion and then send those videos or images to us to get powerful emotion metrics back.”115

  The possibilities in the depth dimension seem endless, and perhaps they will be if Affectiva, its clients, and fellow travelers are free to plunder our selves at will. There are indications of more far-reaching ambitions in which “emotion as a service” expands from observation to modification. “Happiness as a service” seems to be within reach. “I do believe that if we have information about your emotional experiences we can help you be in a positive mood,” Kaliouby says. She imagines emotion-recognition systems issuing reward points for happiness because, after all, happy customers are more “engaged.”116

  IV. When They Come for My Truth

  Rendition is by now a global project of surveillance capital, and in the depth dimension we see it at its most pernicious. Intimate territories of the self, like personality and emotion, are claimed as observable behavior and coveted for their rich deposits of predictive surplus. Now the personal boundaries that shelter inner life are officially designated as bad for business by a new breed of mercenaries of the self determined to parse and package inner life for the sake of surveillance revenues. Their expertise disrupts the very notion of the autonomous individual by rewarding “boundarylessness” with whatever means are available—offers of elite status, bonuses, happiness points, discounts, “buy” buttons pushed to your device at the precise moment predicted for maximum success—so that we might strip and surrender to the pawing and prying of the machines that serve the new market cosmos.

  I want to deliberately sidestep a more detailed discussion of what is “personality” or “emotion,” “conscious” or “unconscious,” in favor of what I hope is a less fractious truth thrown into relief by this latest phase of incursion. Experience is not what is given to me but rather what I make of it. The same experience that I deride may invite your enthusiasm. The self is the inward space of lived experience from which such meanings are created. In that creation I stand on the foundation of personal freedom: the “foundation” because I cannot live without making sense of my experience.

  No matter how much is taken from me, this inward freedom to create meaning remains my ultimate sanctuary. Jean-Paul Sartre writes that “freedom is nothing but the existence of our will,” and he elaborates: “Actually it is not enough to will; it is necessary to will to will.”117 This rising up of the will to will is the inner act that secures us as autonomous beings who project choice into the world and exercise the qualities of self-determining moral judgment that are civilization’s necessary and final bulwark. This is the sense behind another of Sartre’s insights: “Without bearings, stirred by a nameless anguish, the words labor.… The voice is born of a risk: either to lose oneself or win the right to speak in the first person.”118

  As the prediction imperative drives deeper into the self, the value of its surplus becomes irresistible, and cornering operations escalate. What happens to the right to speak in the first person from and as my self when the swelling frenzy of institutionalization set into motion by the prediction imperative is trained on cornering my sighs, blinks, and utterances on the way to my very thoughts as a means to others’ ends? It is no longer a matter of surveillance capital wringing surplus from what I search, buy, and browse. Surveillance capital wants more than my body’s coordinates in time and space. Now it violates the inner sanctum as machines and their algorithms decide the meaning of my breath and my eyes, my jaw muscles, the hitch in my voice, and the exclamation points that I offered in innocence and hope.

  What happens to my will to will myself into the first person when the surrounding market cosmos disguises itself as my mirror, shape-shifting according to what it has decided I feel or felt or will feel: ignoring, goading, chiding, cheering, or punishing me? Surveillance capital cannot keep from wanting all of me as deep and far as it can go. One firm that specializes in “human analytics” and affective computing has this headline for its marketing customers: “Get Closer to the Truth. Understand the ‘Why.’” What happens when they come for my “truth” uninvited and determined to march through my self, taking the bits and pieces that can nourish their machines to reach their objectives? Cornered in my self, there is no escape.119

  It appears that questions like these may have come to trouble Picard. In a 2016 lecture she gave in Germany titled “Towards Machines That Deny Their Maker,” the bland assertions of her 1997 book that “safeguards can be developed,” that additional technologies and techniques could solve any problem, and that “wearable computers” would “gather information strictly for your own use” as “tools of helpful empowerment and not of harmful subjugation”120 had given way to new reflections. “Some organizations want to sense human emotions without people knowing or consenting,” she said. “A few scientists want to build computers that are vastly superior to humans, capable of powers beyond reproducing their own kind… how might we make sure that new affective technologies make human lives better?”121

  Picard did not foresee the market forces that would transform the rendition of emotion into for-profit surplus: means to others’ ends. That her vision is made manifest in thousands of activities should be a triumph, but it is diminished by the fact that so many of those activities are now bound to the commercial surveillance project. Each failure to establish bearings contributes to habituation, normalization, and ultimately legitimation. Subordinated to the larger aims of surveillance capitalism, the thrust of the affective project changed as if distorted in a fun-house mirror.

  This cycle calls to mind the words of another MIT professor, the computer scientist and humanist Joseph Weizenbaum, who spoke eloquently and often on the inadvertent collusion of computer scientists in the construction of terrifying weapons systems. I believe he would have shaken his spear in the direction of today’s sometimes-unwitting and sometimes-intentional mercenaries of the self, and it is fitting to conclude here with his voice:

  I don’t quite know whether it is especially computer science or its sub-discipline Artificial Intelligence that has such an enormous affection for euphemism. We speak so spectacularly and so readily of computer systems that understand, that see, decide, make judgments… without ourselves recognizing our own superficiality and immeasurable naiveté with respect to these concepts. And, in the process of so speaking, we anesthetize our ability to… become conscious of its end use.… One can’t escape this state without asking, again and again: “What do I actually do? What is the final application and use of the products of my work?” and ultimately, “Am I content or ashamed to have contributed to this use?”122

  CHAPTER TEN

  MAKE THEM DANCE

  But hear the morning’s injured weeping and know why:

  Ramparts and souls have fallen; the will of the unjust

  Has never lacked an engine; still all princes must

  Employ the fairly-noble unifying lie.

  —W. H. AUDEN

  SONNETS FROM CHINA, XI

  I. Economies of Action

  “The new power is action,” a senior software engineer told me. “The intelligence of the internet of things means that sensors can also be actuators.” The director of software engineering for a company that is an important player in the “internet of things” added, “It’s no longer simply about ubiquitous computing. Now the real aim is ubiquitous intervention, action, and control. The real power is that now you can modify real-time actions in the real world. Connected smart sensors can register and analyze any kind of behavior and then actually figure out how to change it. Real-time analytics translate into real-time action.” The scientists and engineers I interviewed call this new capability “actuation,” and they describe it as the critical though largely undiscussed turning point in the evolution of the apparatus of ubiquity.

  This actuation capability defines a new phase of the prediction imperative that emphasizes economies of action. This phase represents the completion of the new means of behavior modification, a decisive and necessary evolution of the surveillance capitalist “means of production” toward a more complex, iterative, and muscular operational system. It is a critical achievement in the race to guaranteed outcomes. Under surveillance capitalism the objectives and operations of automated behavioral modification are designed and controlled by companies to meet their own revenue and growth objectives. As one senior engineer told me,

  Sensors are used to modify people’s behavior just as easily as they modify device behavior. There are many great things we can do with the internet of things, like lowering the heat in all the houses on your street so that the transformer is not overloaded, or optimizing an entire industrial operation. But at the individual level, it also means the power to take actions that can override what you are doing or even put you on a path you did not choose.

  The scientists and engineers whom I interviewed identified three key approaches to economies of action, each one aimed at achieving behavior modification. The first two I call “tuning” and “herding.” The third is already familiar as what behavioral psychologists refer to as “conditioning.” Strategies that produce economies of action vary according to the methods with which these approaches are combined and the salience of each.

  “Tuning” occurs in a variety of ways. It may involve subliminal cues designed to subtly shape the flow of behavior at the precise time and place for maximally efficient influence. Another kind of tuning involves what behavioral economists Richard Thaler and Cass Sunstein call the “nudge,” which they define as “any aspect of a choice architecture that alters people’s behavior in a predictable way.”1 The term choice architecture refers to the ways in which situations are already structured to channel attention and shape action. In some cases these architectures are intentionally designed to elicit specific behavior, such as a classroom in which all the seats face the teacher or an online business that requires you to click through many obscure pages in order to opt out of its tracking cookies. The use of this term is another way of saying in behaviorist language that social situations are always already thick with tuning interventions, most of which operate outside our awareness.

  Behavioral economists argue for a worldview based on the notion that human mentation is frail and flawed, leading to irrational choices that fail to adequately consider the wider structure of alternatives. Thaler and Sunstein have encouraged governments to actively design nudges that shepherd individual choice making toward outcomes that align with their interests, as perceived by experts. One classic example favored by Thaler and Sunstein is the cafeteria manager who nudges students to healthier food choices by prominently displaying the fruit salad in front of the pudding; another is the automatic renewal of health insurance policies as a way of protecting individuals who overlook the need for new approvals at the end of each year.

  Surveillance capitalists adapted many of the highly contestable assumptions of behavioral economists as one cover story with which to legitimate their practical commitment to a unilateral commercial program of behavior modification. The twist here is that nudges are intended to encourage choices that accrue to the architect, not to the individual. The result is data scientists trained on economies of action who regard it as perfectly normal to master the art and science of the “digital nudge” for the sake of their company’s commercial interests. For example, the chief data scientist for a national drugstore chain described how his company designs automatic digital nudges that subtly push people toward the specific behaviors favored by the company: “You can make people do things with this technology. Even if it’s just 5% of people, you’ve made 5% of people do an action they otherwise wouldn’t have done, so to some extent there is an element of the user’s loss of self-control.”

  “Herding” is a second approach that relies on controlling key elements in a person’s immediate context. The uncontract is an example of a herding technique. Shutting down a car’s engine irreversibly changes the driver’s immediate context, herding her out the car door. Herding enables remote orchestration of the human situation, foreclosing action alternatives and thus moving behavior along a path of heightened probability that approximates certainty. “We are learning how to write the music, and then we let the music make them dance,” an “internet of things” software developer explains, adding,

 
