Power, for All

by Julie Battilana


  Yet concentrations of power are not immutable. It is always up to us to agitate, innovate, and orchestrate to change the power hierarchy, as many Google employees have shown us.

  CURBING THE POWER OF BIG TECH

  In March 2018, it came to light that Google had entered into a contract with the U.S. Department of Defense to help build an artificial intelligence tool that could analyze drone footage.41 Despite the company’s claim that the project was “non-offensive,” many Google employees were outraged. Realizing that they couldn’t do much by themselves to pressure the company to make substantive changes, they turned to collective organizing. They didn’t just want to agitate—they wanted to innovate and orchestrate change. “We wanted to build the capacity to make decisions ourselves, not just entreat with those in power,” Meredith Whittaker, one of the Google employees involved at the time, told us.42

  The result of their efforts was the development of an open letter demanding that Project Maven (as it was referred to) be canceled immediately. They also pushed for an innovation, demanding that Google draft and enforce a clear policy stating that neither Google nor any of its contractors would ever build “warfare technology.”43 This open letter was ultimately signed by more than 4,600 Google employees.44 When a month went by after the letter’s publication without any substantive reaction from Google’s senior leadership, almost a dozen Google employees resigned to protest the company’s continued involvement.45

  In the meantime, the story of Google’s involvement in Project Maven and its employees’ response had garnered widespread media coverage, which brought further pressure on the company to make a change. In response, the company announced that its eighteen-month contract with the Department of Defense would expire in March 2019 and not be renewed.46 Google CEO Sundar Pichai also released a statement describing a set of AI principles, which included not pursuing “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”47 These guidelines asserted that artificial intelligence and its use should be socially beneficial, avoid creating or reinforcing unfair biases, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.

  After months of tireless work, organizers at Google began to see their efforts bear fruit. Collective action had given visibility to their demands, and the media coverage that it triggered gave them some power, because it had the potential to damage the company’s reputation, a resource that its leadership and shareholders cared deeply about. Yet, soon after this announcement, Google employees turned to collective action once again to push back against the company’s misuse of artificial intelligence.

  In August 2018, just a few months after the Project Maven protests, a report appeared in The Intercept, an online publication, bringing to light another Google initiative: Project Dragonfly.48 The report described Dragonfly as a prototype of a censored version of Google’s search engine being developed for use in China that would blacklist websites and search terms about human rights, democracy, religion, and peaceful protests. Building on their earlier successes, a group of Google employees decided to draft and circulate another open letter, this time calling for the development of an ethics review structure, one that would include meaningful employee input and participation at all levels, as well as an ethical assessment of Project Dragonfly.49 Within three weeks, this letter had more than 1,400 signatories within the company. Organizers again effectively leveraged the news media to amplify their message. Amnesty International took up the cause too, publishing its own letter asking Google to stop the project, increase transparency about its position on censorship, and guarantee protections for whistleblowers.50

  As protests against Dragonfly continued, another bombshell exploded: In October 2018, news broke that Andy Rubin, the creator of the Android mobile operating system, had been paid $90 million when he left Google in 2014 after sexual assault allegations against him were deemed credible.51 Although Google’s leadership might well have used the huge payout as a faster and less litigious way to get an unwelcome senior employee out of the company, in the midst of the international reckoning about sexual harassment and gender discrimination spurred by the #MeToo movement, Google employees found the revelation that the company had allegedly protected a predator in its own ranks unconscionable. So once again, a group of Google employees began organizing a response. But this time, they moved beyond an open letter and took the protests from cyberspace into the physical world, too.

  On November 1, 2018, in cities around the world, Google employees took to the streets to protest. According to the organizers, more than 60 percent of all Google offices, amounting to thousands of employees, participated in the walkouts.52 The same day, they published an article in The Cut articulating five key demands: an end to forced arbitration; a commitment to end pay and opportunity inequity; a publicly disclosed sexual harassment transparency report; a clear, uniform, globally inclusive process for reporting sexual misconduct; and changes in organizational governance, including shifting the Chief Diversity Officer to report directly to the CEO and appointing an employee representative to the Board.53

  In February 2019, Google announced that it would meet one of the walkout’s key demands by ending forced arbitration for employee disputes.54 Yet no announcement was made on Project Dragonfly until July 16, 2019, when Google executive Karan Bhatia announced before the U.S. Senate Judiciary Committee that the project had been canceled.55

  The story of the Google employees is only one example of the many petitions and protests that tech workers have since been organizing to pressure their companies to act more ethically, not only at Google, but also at Amazon, Facebook, Salesforce, Microsoft, and Apple.56 Many have also proposed innovations and tried to orchestrate their adoption, because they realize that the tools they build are political. As Meredith Whittaker, who left Google in 2019, put it, what these employees want is nothing short of “a say and control over the products they build.”57 After decades that saw the pursuit of profit take an outsized role, they have mobilized to push Silicon Valley, and high-tech culture more broadly, toward emphasizing other dimensions as well, such as protecting democratic values and human rights in business decisions and making their workplaces more inclusive. It is this aspiration that, in 2021, led more than four hundred Google employees to create a union.58 Called the Alphabet Workers Union, after Google’s parent company, Alphabet, it strives “to protect Alphabet workers, our global society, and our world. We promote solidarity, democracy, and social and economic justice.”59

  This movement for change led by tech employees from the inside and activists from the outside is as important as it is challenging, in the face of these companies’ inordinate power. To provide citizens, consumers, and smaller companies with more information and more alternatives to the services these giant corporations offer, regulation is necessary too.

  REGAINING CONTROL

  If we are to rebalance power in the digital era, we need to gain some measure of control over our personal data and the algorithms that affect so many aspects of our lives. But where do we start? As individuals, we have some ways to protect our privacy and data, for example, by using features that allow us to browse incognito and delete our browsing histories when we close our computers, or by using alternative browsers and apps that better protect personal information. Yet these features are useful only up to a point, as our IP addresses are still visible, which means that our internet service providers, our employers, and/or the government can still track our activity online.

  Ultimately, protecting ourselves from biased algorithms and from the loss of control over our personal data requires changing the laws and then making sure they are enforced. Addressing the 2021 Computers, Privacy and Data Protection conference, Apple’s CEO, Tim Cook, forcefully called for far-reaching data privacy reforms to “send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated.”60 Sundar Pichai has also called for such regulation. In an op-ed published in the Financial Times in 2020, he insisted that regulation was needed and suggested that existing “rules such as Europe’s General Data Protection Regulation can serve as a strong foundation.”61

  In 2016, the European Union passed the General Data Protection Regulation (GDPR). This landmark legislation gives every European citizen free access to the information companies collect about them, forces companies to seek explicit consent for data collection, limits what data can be collected, and gives citizens the right to seek compensation for privacy breaches.62 The law’s passage did not mark the end of the work for activists who had been tirelessly pushing for it, however. On May 25, 2018, the day the regulation came into effect, Austrian lawyer and privacy activist Max Schrems and his organization, noyb (short for “none of your business”), filed legal cases against Facebook and Google, specifically targeting their now illegal take-it-or-leave-it privacy policies.63 In February 2020, noyb filed an additional complaint against Amazon over its data security practices.64

  Activists like Schrems and noyb see their role as that of a watchdog, bringing important cases and privacy issues to the attention of legal and regulatory agencies. In addition to changing the laws, the action of activists and organizations—a movement, in short—continues to be essential to leverage the new legislation and force companies as well as public authorities to comply with it. As legal scholar Lina Khan has noted, relevant laws are often already on the books, but they aren’t applied consistently and sometimes languish unused for decades, as is the case for antitrust legislation in the United States.65

  Voices like Khan’s are helping revive governmental oversight of Big Tech, with U.S. lawmakers and attorneys general at both state and federal levels beginning to challenge companies that operate as monopolies in many markets for their anti-competitive practices.66 So while European legislators have been the most aggressive thus far in fining technology corporations for behavior they consider an abuse of their market dominance,67 governments around the world have been waking up to the threat of these extreme power imbalances, and legislative proposals are increasingly under consideration.68 Antitrust legislation is not the only domain in which there has been movement. In 2021, Australia passed a law requiring social media companies to pay for the journalism appearing on their platforms, despite their protestations—a landmark step toward restoring a measure of power for public-interest journalism.69 Judicial systems are also beginning to hold tech companies to account for algorithmic bias. In a watershed 2021 lawsuit brought by delivery riders against food app Deliveroo, a court in Bologna, Italy, ruled that even if an algorithm discriminates against workers unintentionally, a company can still be held liable and be forced to pay damages.70

  It isn’t easy to regulate complex technologies that tend to evolve rapidly, however. Aware of the critical role that activists with technical expertise can play in crafting effective regulation, Meredith, in her new position as the head of the AI Now Institute that she cofounded, has continued to advocate for a more inclusive tech industry and for the responsible and ethical use of artificial intelligence in relation to issues of race, gender, and power.71 In her 2019 testimony before the U.S. House Committee on Science, Space, and Technology, she outlined key priorities related to AI, including the need to halt both governmental and commercial use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations, such as biometric privacy laws and assessments to vet algorithms for bias, are in place.72

  The latter will be particularly challenging, not only in terms of deciding where responsibility for assessing algorithms for bias should rest, but also because what constitutes a fair algorithm is a complex question, one that engineers, computer scientists, and legal scholars on the front lines of ethical AI development are asking with increasing urgency.73 But we do know some things about how algorithms function and where we are better equipped to exercise oversight. Computer scientists have a saying: “garbage in, garbage out,” meaning that if you feed an algorithm biased inputs, you will get biased outputs.74 Although those who develop these algorithms may not directly choose what the output of an algorithm will be (AI algorithms usually include too many input variables for anyone to control directly), they do control its parameters. Critically, they decide what data to “feed” the algorithm, and they fine-tune and tweak it to modify how it learns. In facial recognition technology, for example, the disproportionate misidentification of Black faces compared to White faces can partly be addressed by feeding the algorithm a disproportionate number of images of Black faces, and then measuring the accuracy of the algorithm’s output.75 Requiring transparency about the training data that are fed to the algorithm and the measurement of its outputs is an attainable goal for regulatory intervention.
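
  To make this concrete, consider a minimal sketch, in Python, of what such an output measurement could look like. It is purely illustrative and not drawn from the book or from any company’s actual system; the predictions, labels, and group categories below are hypothetical placeholders standing in for a model’s real audit data.

    # Illustrative sketch of an algorithmic "output audit": measure a
    # classifier's accuracy separately for each demographic group, so that
    # disparities of the kind found in facial recognition become visible.
    # All data here are hypothetical placeholders, not real system outputs.
    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Return the share of correct predictions within each group."""
        correct, total = defaultdict(int), defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            total[group] += 1
            if pred == label:
                correct[group] += 1
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical audit data for a model that errs far more often on group B.
    predictions = ["match", "match", "no_match", "match", "no_match", "no_match"]
    labels      = ["match", "match", "no_match", "no_match", "match", "match"]
    groups      = ["A", "A", "A", "B", "B", "B"]

    for group, accuracy in sorted(accuracy_by_group(predictions, labels, groups).items()):
        print(f"group {group}: accuracy {accuracy:.0%}")

  A persistent accuracy gap between groups is precisely the kind of measurable, publishable number that transparency requirements could oblige companies to disclose, and it signals when the training data may need to be rebalanced.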

  As we take action to regain control over technology, the kind of inclusive mindset that Bunker Roy had when he created the Barefoot College will be increasingly important. As his successor Meagan told us: “It is only when women in the developing world are able to sit at the table with engineers, designing technology that better meets human needs, that we will be on the way to creating a technological revolution that is inclusive to all.” Nezuma’s experience is a testament to the positive impact that technology can have on people’s lives when its design and implications are carefully thought through, and the time is taken to train people in its use.

  Such training is becoming more important by the day, as automation increasingly replaces human workers with computers and machines.76 The Organization for Economic Co-operation and Development (OECD) now puts 22 to 45 percent of existing jobs at risk of vanishing across its member countries.77 Some of the resistance to the technology companies stems not only from legitimate concern about their outsized power, but also from the loss of autonomy and sense of achievement that workers and professionals in many fields are experiencing as the resources they have to offer lose value in the marketplace. It’s a scary place to be, one that deprives people of their safety and self-esteem and can turn them into Luddites. But while automation can make humans increasingly irrelevant for tasks that can be codified and repeated, it also makes people increasingly indispensable in performing tasks that require creativity and social skills.78 The power of human workers over digital machines hinges on the execution of non-routine tasks, physical dexterity and versatility, ideation and originality, social perception, persuasion and trust, as well as the design of human-machine partnerships.

  Investing in people’s (re)training will require providing points of entry to education and skills development tailored to both adults and youth and adjusted to people’s level of literacy.79 This requires educational systems capable of developing not just people’s technical skills, but also their cultural, moral, artistic, scientific, and critical capabilities. These capabilities are what differentiate us from machines and what our unique value—and thus our power—depends on.

  PUTTING POWER IN THE HANDS OF MANY

  Humankind’s pursuit of safety and self-esteem has driven us to develop technologies that have enabled us to explore, control, and leverage our environment. We have learned to master new tools, from water purification and wind energy to smartphones and robots. And scientists have now embarked on the journey to modify the very essence of life itself: our DNA and that of other species. Advances in gene editing technology, like CRISPR,80 have the potential to cure many crippling diseases and modify forms of life to secure food supplies for all, while curbing deleterious human impacts on the environment. Yet, for all our mastery, the persistence of vast inequalities and the frequency of ecological crises from hurricanes to wildfires have taught us two critical lessons.

  The first is that with each wave of technological change, power changes hands but doesn’t necessarily become more equally distributed. The digital revolution is one example among many of how new technologies can result in the concentration of power and wealth in the hands of a few individuals and organizations. Social entrepreneur Greg Brodsky put it this way: “Technology has disrupted almost every part of the economy: the gig economy, gaming, shopping, and how to book hotels. But the one thing the technology sector has not been willing to touch is ownership itself. In some ways, the tech sector is just recreating the wealth inequality in every other part of the economy.”81

  The second lesson is one in humility: Even the most sophisticated technologies will never allow us to control everything, as Mother Nature keeps reminding us. Some of the advances we have engineered have backfired. Our exploitation of the world’s resources has accelerated anthropogenic climate changes that risk transforming life on Earth as we know it. While coal and oil have literally powered modern industries and economies, the global warming resulting from greenhouse gas emissions is altering the natural equilibria that have made our climate predictable and allowed human civilizations to develop for the past twelve thousand years.82

  Although we cannot control everything, we do control how we choose to organize ourselves as a society. And we can control when, how, and for what purposes we use technology. Some will argue that markets are the solution to difficult decisions about whether a technology should be used. Yet the mere demand for a product does not warrant its existence. Investors lining up to get their share of a bustling surveillance technology market is no justification for its widespread use. Such unrestrained markets—free from political and moral oversight—powered the transatlantic slave trade that forcibly transported more than ten million Africans to the Americas over three hundred years.83 Undiscerning reliance on markets also prompted the corporate world’s radical shift to the mantra of shareholder profit maximization. Corporate leaders and investors lost sight of the environmental and societal implications of their activities, exacerbating environmental destruction and socioeconomic inequalities. Leaving control of technology to the market alone will make the world less safe, less humane, and even more unequal, and conceivably doom humanity to extinction.

 
