Dirty Work

by Eyal Press

  The pushback from employees and heightened attention from the media eventually prompted Google to reconsider its priorities. In June 2018, Diane Greene, then the head of Google Cloud, informed the company’s employees that Google would not renew the Project Maven contract when it expired. A week later, Google unveiled a new set of AI principles, affirming that it would not pursue any surveillance projects that fell outside “internationally accepted norms.” On paper, this sounded like a principled position. To Laura, it sounded like a sly evasion, not least because there were no “internationally accepted norms” for surveillance projects. Far from signaling a repudiation of Project Maven, Google’s new AI principles left the door open for similar work in the future, Laura was convinced. Rather than wait for this to happen, she decided to exercise the same option that Jack Poulson did: she resigned, one of roughly twenty Google employees who left the company because of Project Maven. On her last day at the office, she cried, she told me. But she also felt a weight lift, having concluded that she could no longer work as a site reliability engineer at Google and sleep well at night.

  “SPOILED IDENTITY”

  As I noted at the outset of this book, dirty work has a number of essential features. One of them is that it causes substantial harm to other people or to the natural world. Another is that it causes harm to the workers themselves, either by leading people to feel they have betrayed their own core values or by making them feel stigmatized and devalued by others. As Laura Nolan’s experience showed, holding a high-skill, high-paying white-collar job doesn’t necessarily prevent people from feeling they are betraying their core values. Had she continued working at Google, her guilt would have intensified, she told me. Eventually, it might have led her to experience a moral injury. The fact that, like Jack Poulson, she didn’t continue to work at Google showed, again, why it is so much easier for people in high-skill white-collar professions to avoid sustaining such wounds.

  But what about feeling stigmatized and devalued by others? Are white-collar workers more likely to be shielded from this as well? Not according to Thomas Roulet, a lecturer at the University of Cambridge Business School. In 2015, Roulet published an article in which he argued that such a fate befell an entire white-collar profession—the banking industry—after the 2008 financial crash. To build his case, Roulet drew on an influential work of social theory, Erving Goffman’s Stigma: Notes on the Management of Spoiled Identity. Published in 1963, Goffman’s book defined stigma as an “attribute that is deeply discrediting,” so much so that it “disqualified [a person] from full social acceptance.” The discrediting attribute could be a bodily sign. It could also be a character trait or an affiliation with a discredited racial or religious group—any “signifier” that veered from societal norms and led an individual to be placed in a separate and defiled category, “reduced in our minds from a whole and usual person to a tainted, discounted one.”

  Goffman applied this theory to the “moral careers” of individuals. But as Roulet noted, it could also be applied to organizations. “Like individuals, organizations are also subject to disqualification from full social acceptance,” he observed. After the 2008 financial meltdown, he argued, the financial industry qualified as such an organization, owing to the fact that the “dominant logic” of the profession—the theory of shareholder maximization, which held that a firm’s sole duty was to enrich its shareholders—came to be seen as incongruous with the broader norms of society and the common good. In the shadow of the subprime crisis, what investment bankers understood to be their jobs was depicted as reprehensible. The stigma that resulted from this disjuncture was diffused through the media, Roulet maintained, which pilloried bankers for their greed. Articles such as “What Good Is Wall Street?,” by the New Yorker journalist John Cassidy, suggested the world might be better off without firms like Goldman Sachs and Morgan Stanley.

  That some media outlets relished depicting Wall Street in a negative light after the subprime meltdown is true enough. For members of a stigmatized organization, though, the bankers in Roulet’s study were doing strikingly well for themselves. In his study of stigma, Goffman took it as given that an individual with a “spoiled identity” would be hobbled by this designation: “We effectively, if often unthinkingly, reduce his life chances.” In the aftermath of the 2008 financial meltdown, however, the life chances of Wall Street bankers showed no signs of suffering. In the year after the crash, the average pay at Goldman Sachs, Morgan Stanley, and JPMorgan Chase rose 27 percent. Negative press coverage did not stop the leading firms on Wall Street from handing out tens of billions of dollars in bonuses. Nor did it damage the careers of the attorneys and lobbyists who did their bidding, lawyers like Eugene Scalia, who earned millions while helping the financial industry try to gut the Dodd-Frank Act and other regulations.

  How disqualified from social acceptance could an industry that handed out such bonuses be said to be? How tainted and discounted were its employees and enablers likely to feel? Not very, common sense suggested, owing to a factor that was conspicuously absent from Roulet’s analysis of the financial industry and that was largely missing from Goffman’s study as well. This factor was power. As Goffman observed, stigma was acquired through relationships, social interactions during which stereotypes were formulated that led individuals to feel tainted and discounted. But as the scholars Bruce Link and Jo Phelan have argued, the potency of these stereotypes was entirely dependent on how powerful the people making them were. To illustrate this point, they described a hypothetical scenario in which patients at a mental health hospital applied derogatory labels to staff members who were arrogant and cold. The patients might mock these staff members behind their backs. They might wish to cast them into a discredited category. Even so, Link and Phelan maintained, “the staff would not end up being a stigmatized group. The patients simply do not possess the social, cultural, economic and political power to imbue their cognitions about staff with serious discriminatory consequences.” It was power—including the power “to control access to major life domains like educational institutions, jobs, housing, and health care”—that put “consequential teeth” into a stigma. It was, in turn, the absence of power that made people vulnerable to being stigmatized, saddled with a “spoiled identity” that hampered their life chances.

  Power did not shield bankers and other successful white-collar professionals (lawyers, lobbyists, tech workers) from moral opprobrium. But it made this opprobrium far less blighting and damaging: to their income, to their status, to their dignity and self-esteem. Bankers who continued earning lavish bonuses after the financial meltdown could “manage” stigma in ways that dirty workers could not—by, for example, donating money to a philanthropic organization, a virtue-affirming gesture unavailable to workers of lesser means. Even if some people looked askance at what they did, success could also breed an air of superiority and entitlement that made criticism far easier to dismiss. This helped explain why, in the aftermath of the 2008 financial crash, many investment bankers felt not tainted and discounted but indignant and aggrieved, outraged at the mere thought of having their industry regulated and of being subjected to public reproach. The outrage was an example of what the political philosopher Michael Sandel has called “meritocratic hubris,” the inflated self-regard accrued by elites who managed to obtain degrees from the best law schools, business schools, and engineering programs, ostensibly because of their talent and hard work. This was the premise of meritocracy, a system that sorted people into different income brackets and career paths based on their ability to secure coveted spots at elite educational institutions. As Sandel has observed, one consequence of this system has been to diminish the dignity and self-esteem of working-class people who do not have degrees from top universities and whose fortunes have declined or stagnated in recent decades. Another is to burnish the moral credentials of society’s “winners,” hyper-educated achievers who have been encouraged “to regard their success as their own doing, a measure of their virtue—and to look down upon the less fortunate.”

  The hubris of successful meritocrats is unwarranted, some would argue, because hyper-educated achievers so often come from privileged families and wealthy backgrounds. Yet it is rooted in something successful meritocrats accurately sense, which is that even people who hold them in contempt simultaneously envy and admire them. For a job to qualify as dirty work, it needs to involve doing something that “good people”—the so-called respectable members of society—see as morally sullying and would never want to do themselves. This is true of the work performed by slaughterhouse laborers and prison guards; by the “joystick warriors” in the military; by roustabouts like Stephen Stone. It is not true of software engineers and site reliability engineers who work in Silicon Valley, or for that matter financiers and bankers on Wall Street.

  “SEE NO EVIL, SPEAK NO EVIL”

  Investment bankers and software engineers tend to be spared the indignities that encumber dirty workers. But this hardly means the companies they work for play no role in profiting from, and shaping, dirty work. After I spoke with Laura Nolan, it occurred to me that Project Maven could be viewed another way: as the template for the kind of dirty work that will proliferate in the future, a brave new world in which ethically troubling tasks will increasingly be delegated not to human laborers but to robots and machines. The “problem” of how to continue fighting endless wars could be solved with autonomous weapons systems programmed to strike targets on their own, obviating the need to hire “desk warriors” to do the killing. The dirty work of extracting fossil fuels could be accomplished not by hiring riggers like Stephen Stone but through cloud services and artificial intelligence. This was indeed already happening, according to a 2019 article in the online publication Gizmodo, which described the lucrative deals that Amazon, Google, and other tech firms had struck with oil companies to provide them with such services. “Google is using machine learning to find more oil reserves both above and below the seas, its data services are streamlining and automating extant oilfield operations, and it is helping oil companies find ways to trim costs and compete with clean energy upstarts,” noted the article, which accused “Big Tech” of “automating the climate crisis.”

  Automating such functions would not entirely eliminate the need for human beings to participate in them: someone would still need to design and program the machines. But it could limit their participation to technical tasks for which it would be easy to diffuse responsibility, something Laura Nolan felt was inherent to high-tech work. By way of example, Laura mentioned the hidden tracking mechanisms that companies like Google used to harvest users’ personal data for advertisers, which she had come to regard as unethical. Google was the pioneer of this practice, Shoshana Zuboff argued in The Age of Surveillance Capitalism, honing the art of personal data mining through its “ever-expanding extraction architecture.” But even if some software engineers at Google might have agreed that creating this invisible architecture was distasteful, few were likely to feel implicated in its construction and design, because their jobs involved doing other, more mundane things. “There are not that many people who are actually writing the codes that are deciding what data to collect on an individual,” said Laura. “An awful lot of people are just writing code that is looking after the servers. For every one person who is doing something that could directly be seen as problematic, probably hundreds or thousands of people are just doing the housework—the plumbing and the cleaning.

  “And you know, doing housework isn’t wrong,” she continued. “It’s very easy to diffuse the responsibility for work that you might not have a direct hand in. It’s like me with Project Maven. I was not being asked to write code that was tracking pine-nut farmers that were gonna get blown up. I was asked to do a thing that enabled that.”

  Technology alone did not ensure that responsibility would be diffused. As we have seen, it did not stop imagery analysts in the drone program from being inundated by graphic images—burned bodies, cratered homes—that could cause severe emotional distress. But the engineers at Google did not witness such images. Like Laura, they were several steps removed from the consequences of their actions, performing specialized functions whose exact purposes might not be clear even to them. As Laura noted, one reason for this was that technology was fairly easy to design for one use and then repurpose for another. While workers at a plant that manufactured tanks knew how the vehicles they were building would be used, this was not true of code, she said. “Code is a lot more plastic. In tech, you can build or design something and be told it is for purpose A and then very easily have it adapted to evil purpose B.”

  This plasticity enabled employers to keep workers in the dark about what was actually going on, a problem Jack Poulson noted as well. With the Dragonfly project, “employees just had no idea what the impact was of the things they were focused on,” he said. “Even the privacy review team didn’t know in some cases.” Compounding this was another problem, which was that the work was parceled up and fragmented. While working at Google, Jack came across the work of Ursula Franklin, a physics professor who drew a distinction between holistic and prescriptive technologies. Holistic technologies were crafts performed by artisans (potters, metalsmiths) who, as Franklin put it, “control the process of their own work from beginning to end.” In prescriptive technologies, by contrast, the work was divided into small steps, which led workers to focus narrowly on the discrete tasks they were given. This was the reality in the high-tech world, Jack had concluded, a world where the natural impulse was to concentrate on meeting narrow technical benchmarks that seemed devoid of moral consequences, which could easily be put out of mind. “You can very easily compartmentalize,” he said.

  * * *

  Yet if some tech workers could be accused of benefiting from a system in which responsibility was diffused, in a different way so could all of us: the tech industry’s patrons and consumers. Since the tech backlash and Silicon Valley’s fall from grace, many consumers have developed a more jaundiced view of large social media and technology companies. This has notably failed to reduce the amount of time most people spend gazing at their laptops and smartphones, or to lead the public to pay closer attention to other things, like the conditions under which the gadgets in their pockets are produced.

  As numerous human rights organizations have shown, the global tech supply chain is anything but clean. On one end of this chain are the electronic gadgets on display at Best Buy and Apple Stores. On the other end are the artisanal mines featured in a report researched jointly by Amnesty International and African Resources Watch. The mines are located in Kolwezi, a city in the Democratic Republic of the Congo, which produces more than half of the world’s cobalt, a key ingredient in the rechargeable lithium-ion batteries that power laptops and mobile phones (as well as electric cars). The creuseurs (diggers) who toil in the mines endure appalling conditions, the Amnesty report indicated, working twelve- to fourteen-hour days without gloves or face masks while breathing in toxic chemicals that can cause ailments such as the potentially fatal “hard metal lung disease.” Many of the workers are children, driven to work in the mines because of desperate poverty and because their families cannot afford to send them to school. Deaths are common as creuseurs descend into narrow, makeshift mines that often collapse. (The dangerous conditions called to mind Orwell’s description of the mines in The Road to Wigan Pier. In the metabolism of global capitalism, coal had given way to cobalt, but some things hadn’t changed.) The rocks they scrape out with primitive tools are sold to companies like Congo Dongfang Mining International, a subsidiary of Huayou Cobalt, which is based in China. Eventually, some of the cobalt makes its way into products sold by companies such as Microsoft, Samsung, and Apple.

  The report, “‘This Is What We Die For,’” was published in 2016. Three years later, I met Catherine Mutindi, a nun who founded Bon Pasteur, a nongovernmental organization based in Kolwezi that runs a community development program offering free education to children employed in the artisanal mining sector. As the report made clear, companies like Apple and Microsoft weren’t directly involved in the Congo’s brutally exploitative mining sector. They relied on middlemen further “downstream”—middlemen who, in effect, did the dirty work for these companies and their customers, arranging for the cobalt to be extracted and delivered without too many questions asked. After the Amnesty International report appeared, many companies took steps to upgrade their sourcing practices, Mutindi told me, joining ventures such as the Responsible Raw Materials Initiative, which sought to improve the vetting of suppliers and to strengthen due diligence policies. According to Mutindi, however, these initiatives had done little to change conditions on the ground. Not long after we met, she sent me a link to a story in The Guardian about a lawsuit that fourteen Congolese families had filed against some of the world’s largest tech companies, which they claimed were complicit in dangerous conditions that had caused their children to suffer serious injury or death. One of the children, paid seventy-five cents a day to haul bags of cobalt rocks, was paralyzed after falling in a tunnel. Another was buried alive when a tunnel collapsed. Among the companies named in the lawsuit were Apple, Microsoft, and Dell, targeted because they had “the authority and resources to supervise and regulate their cobalt supply chains” but failed to do so, the plaintiffs alleged. All of the companies denied this allegation. An Apple spokesperson told The Guardian, “Apple is deeply committed to the responsible sourcing of materials that go into our products. We’ve led the industry by establishing the strictest standards for our suppliers.” A spokesperson for Dell said, “Dell Technologies is committed to the responsible sourcing of minerals, which includes upholding the human rights of workers at any tier of our supply chain and treating them with dignity and respect.” A Microsoft spokesperson said, “If there is questionable behavior or possible violation by one of our suppliers, we investigate and take action.”

 
