Power, for All

by Julie Battilana


  Representatives of Barefoot College visited her village, the way they visit similar rural communities every day, all over the world. They invited the villagers to participate in the college’s flagship solar program to bring this renewable and sustainable form of energy to their community. There were, however, a few conditions: The villagers would have to designate one of them to go through the training; this person would have to commit to coming back to install and maintain the equipment, which the college would supply; and they had to agree to pay the trained solar engineer to maintain the service. This was all well and good, but there was one more condition that took the villagers, who were used to men being in charge, by surprise. The person would have to be a woman, ideally middle-aged and a mother or a grandmother, who—Barefoot College representatives knew from experience—would return at the end of the training and not abandon her community.

  As the college representatives stated these conditions, many of the villagers intuitively turned to Nezuma. They would often look to her for help and advice when they were in trouble. But at first, joining the college seemed impossible to Nezuma, not only for logistical reasons, but also because her husband would never allow it. He was utterly opposed to her leaving her family and living by herself for five months. With the help of the village’s chief, who could see how much the program would benefit the community, and Nezuma’s mother, who agreed to help take care of the family during her daughter’s absence, the Barefoot College representatives convinced Nezuma’s husband to let her participate. The college staff also arranged for him to visit his wife once a week. For the next five months, Nezuma lived on the campus in Kinyasini, learning to install and repair solar electrical systems from other women who had themselves become solar engineers through the program. Because most of these women were illiterate or semiliterate, most of the teaching was done using visual learning tools, such as color-coded pictures and manuals. Nezuma didn’t learn only about solar electricity. Meagan Fallone, the social entrepreneur who succeeded Bunker as Barefoot College CEO, enriched the program with training on women’s health and rights as well as digital and entrepreneurial skills. “We want our trainees to learn about technology and about their bodies, their rights, and their responsibilities,” Meagan told us. “This holistic approach truly transforms them.”9

  Back in her village at the end of the program, Nezuma felt a new sense of power. Bright and hard-working, she electrified the community. And she received help from an unexpected source: her husband. Transformed from a skeptic to a staunch ally, he could now regularly be found stabilizing the ladder while Nezuma was on a neighbor’s roof or assisting her with a repair. He had realized how much he and his family had to gain from Nezuma’s newfound prominence. In a few months, she had gained control over access to one of the most valuable and sought-after technologies in her village. Her newfound power made her the first woman to participate in the influential village council. This was the genius of Barefoot College’s innovation: The middle-aged women who went through its program became not only a literal source of power for their villages, but also powerful themselves.

  Bunker and Meagan understood the potential of technology to change the power map for the better. Over the course of history, technological and scientific advances have done more than just greatly improve our daily existence, as psychologist Steven Pinker notes in his book Enlightenment Now. They have also brought us closer to becoming “like lords and possessors of Nature,” in the words of French scientist and philosopher René Descartes.10 For Descartes, science and technology were the gateway for people to understand, interpret, and analyze nature, and thereby gain a measure of control over Mother Nature herself.11 We have become so astoundingly powerful that we have even developed plans to prevent massive asteroids from hitting Earth and annihilating us.12

  The digital revolution that reached warp speed at the turn of the twenty-first century has increased our power at a staggering pace.II In 1989, at one of the world’s largest physics laboratories, Tim Berners-Lee and Robert Cailliau invented a new network for sharing and searching information. They called their invention the World Wide Web. As they discussed what to do with it, they had a conversation that would have a tremendous impact on the way we live, work, and play. They were debating whether to patent their discovery, which would prevent its replication, use, or improvement by others. As Cailliau recounts it, “[Tim] said, ‘Robert, do you want to be rich?’ I thought, Well, it helps, no? He apparently didn’t care about that. What he cared about was to make sure that the thing would work, that it would just be there for everybody.”13

  Their decision to keep the World Wide Web open-source and freely accessible to all embodied their vision of technology as an equalizing force. Berners-Lee believed that it could help liberate human potential through collaboration and knowledge-sharing at unprecedented scale. He imagined that it would, for once, give everyone access to knowledge. And to some extent he was right. Those of us who can access the internet do have a limitless supply of information at our fingertips. This ability to more easily access information and connect with one another online has provided new channels through which grassroots movements can challenge existing power hierarchies, as we saw with the #MeToo and Black Lives Matter movements.14

  Taken together, these shifts have given humankind many more ways to satisfy our needs for safety and self-esteem. But more power overall doesn’t necessarily mean more power for all. It takes intentional interventions like the one developed by Barefoot College for technology to empower people whose access to valued resources is constrained by existing power hierarchies. In the absence of such purposeful use of technology, the digital revolution has mostly fallen short of the great equalizing force that Berners-Lee and Cailliau envisioned it could be. Like previous technological waves, it changed the distribution of power, but it did not benefit everyone equally.15 A few have emerged as the digital revolution’s big winners, with tremendous power concentrated in their hands.

  HOW THE DIGITAL REVOLUTION SHUFFLED THE CARDS

  One of the earliest traces of an algorithm was found in Iraq, inscribed on a Sumerian clay tablet dating back to around 2500 BCE. It consisted of a set of instructions for the task of division.16 At its root, an “algorithm” is just another word for a set of directions.17 Algorithms have evolved a great deal since their early appearance, and today, the word is most commonly used to refer to “a sequence of instructions telling a computer what to do.”18 These instructions can be input by humans. For instance, a coder might write an algorithm telling the computer to generate the shortest walking route between point A and point B. But in the era of big data enabled by the digital revolution, computers can be made to write their own instructions based on large sets of data inputs and outputs provided by the coders. For example, if the coder feeds the computer a list of divisions, without the computer knowing what division is, the machine will spot the pattern and learn to replicate it on its own. This is what people mean by “machine learning,” an application of artificial intelligence (AI) that has tremendously accelerated our ability to process and learn from massive amounts of data, and to optimize and increase the efficiency, precision, and predictive accuracy of machines.19
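  To make the division example concrete, here is a minimal sketch in Python. The data and the log-space trick are illustrative choices, not a description of any particular system: the program sees only input pairs and their quotients, never the rule itself, and because division becomes subtraction in log space, an ordinary least-squares fit can spot the pattern and apply it to numbers it has never seen.

```python
import numpy as np

# Illustrative training data: pairs (a, b) and their quotients a / b.
# The program is never told the rule; it only sees inputs and outputs.
rng = np.random.default_rng(0)
a = rng.uniform(1, 100, size=1000)
b = rng.uniform(1, 100, size=1000)
quotients = a / b

# In log space, division becomes subtraction, so a linear model
# can "spot the pattern": log(a/b) = 1*log(a) + (-1)*log(b).
X = np.column_stack([np.log(a), np.log(b)])
y = np.log(quotients)

# Fit y = w1*log(a) + w2*log(b) by least squares.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(weights)  # approximately [1.0, -1.0]

# Apply the learned rule to numbers the model has never seen.
def predict(a_new, b_new):
    return np.exp(weights @ np.log([a_new, b_new]))

print(predict(84.0, 7.0))  # approximately 12.0
```

  This is the essence of machine learning: the coder supplies examples and a model family, and the machine fits the parameters, here recovering weights of roughly 1 and -1 that together encode “divide a by b.”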

  Digital technology is producing astounding progress, improving our lives in countless ways. Coupled with big data, machine-learning algorithms can learn from thousands of medical images to recognize a cancerous mass in human tissue earlier and more accurately than the human eye can.20 Mobile health technology is improving and reducing the cost of transmitting health-care information, delivering patient care, and monitoring medication adherence, with the potential to democratize access to health solutions in rural areas in low- and middle-income countries worldwide.21 And technology’s benefits go well beyond medicine. With the aid of its innovative tools and processes, we are improving the productivity of natural resources and sources of energy, the safety and performance of motor vehicles and industrial materials, the availability and affordability of information and consumer goods, and myriad other conveniences and opportunities to satisfy our need for safety and self-esteem.

  Algorithmic decision-making is improving our lives in large part because it uses vastly greater amounts of data than humans can process, and it is consistent, precise, and reliable in ways that people aren’t. As a result, it has the potential to benefit people whom human decision makers, consciously or unconsciously, discriminate against. For example, a study of automated loan underwriting showed that it can be more accurate than human underwriting in predicting defaults, resulting in a higher rate of approved loans, especially for underserved customers.22

  Alongside this progress and potential for empowerment, however, digital technology and AI have also affected two critical aspects of the distribution of power that require our vigilance and oversight: the control of algorithms and the control of personal information.

  First, control of algorithms is critical because they can be biased, and when they are, the scale of their application enabled by digital technology and big data means that even small biases can affect large numbers of people.23 For example, the U.S. government and its law enforcement agencies are increasingly using machine-learning algorithms to police neighborhoods and identify and monitor criminals. Helpful as technology like facial recognition–enabled cameras in public transit or biometric data collection in airports may be in reducing and preventing crime, the imperfections of the algorithms that power them can be disproportionately damaging to certain segments of the population, often those that are already disadvantaged. To wit, facial recognition algorithms are five to ten times more likely to misidentify Black faces than White ones, increasing the chances of innocent Black men and women being profiled and prosecuted for crimes they have not committed.24 One reason facial recognition systems are less accurate on darker skin is that many of the datasets used to train and test facial analysis systems are not properly representative. As Joy Buolamwini of the MIT Media Lab put it, “if the training sets aren’t really that diverse, any face that deviates too much from the established norm will be harder to detect.”25 It’s not hard to see how such bias, unintentional as it may be, contributes to perpetuating and deepening existing power hierarchies.26
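  One way to surface this kind of disparity is to measure error rates separately for each demographic group in a test set, rather than relying on a single aggregate number that can mask a wide gap. The Python sketch below is purely illustrative, using made-up records rather than data from any real system, but it shows the shape of such a disaggregated audit.

```python
from collections import defaultdict

# Illustrative evaluation records: (group, true identity, predicted identity).
# In a real audit these would come from a labeled benchmark dataset.
records = [
    ("group_a", "alice", "alice"), ("group_a", "bob", "bob"),
    ("group_a", "carol", "carol"), ("group_a", "dan", "dan"),
    ("group_b", "erin", "frank"), ("group_b", "frank", "frank"),
    ("group_b", "grace", "erin"), ("group_b", "heidi", "heidi"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

overall = sum(errors.values()) / sum(totals.values())
print(f"overall misidentification rate: {overall:.0%}")  # 25%
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%}")  # group_a: 0%, group_b: 50%
```

  In this toy audit, the overall error rate looks modest while one group bears every error, the same pattern Buolamwini’s research documented at scale.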

  These unintended consequences of algorithmic decision-making are hard for us to detect and fix, unaware as we often are that algorithms are affecting our lives at all, and accustomed as many of us are to assuming that algorithms are “neutral.” Some engineering whiz wrote a set of rules, the thinking goes, and since this is math, the result must be unbiased and factual.27 But the devil is in the details. “Algorithms are opinions embedded in code,” as mathematician Cathy O’Neil put it, and assuming that their output is always “objective” leads us to dissociate it from human responsibility and accountability.28

  The people making these important choices are coders and computer engineers. Who these people are and how they work can also introduce bias into the algorithms they build. Only a small percentage of the technical employees at companies like Apple and Facebook are women and BIPOC (Black, Indigenous, and People of Color).29 Yet diverse perspectives are critical to identifying biases and applying critical lenses to testing and tweaking algorithms. Moreover, most of these engineers work for corporations, acting on the orders of executives who are in turn accountable to profit-seeking investors. As a result, they cannot share the code behind their company’s golden goose, and no one outside can access, analyze, or challenge the algorithms. This lack of transparency keeps control in the hands of the engineers and companies that develop and profit from the products, free from public oversight and accountability.

  The second power shift effected by the digital revolution is the transfer of control over personal information. While digital technologies have given us access to unlimited information online, we have very little control over the information about us available there. Imagine a world in which your every movement could be observed: Every step you took, every meal you ate, and every conversation you had could be monitored. The English philosopher Jeremy Bentham first conceived of such a system as a way to structure prisons, in which a single guard would remain invisible to the inmates while keeping an unobstructed view of every one of them at all times. Bentham called this system a “panopticon.” French philosopher Michel Foucault borrowed Bentham’s concept in the 1970s and used it as a metaphor to describe how, through the use of surveillance, the few could exercise social control over the many.30 The result would be a society remarkably ordered and regulated, but completely unfree. Today, more and more aspects of our lives unfold under a digital panopticon, and we can’t always tell when we are being watched.

  As we use our digital devices, millions of data points are recorded, leaving traces of our habits, desires, and needs. Matched with increased data storage capabilities, this information gives those with access to it an unprecedented opportunity to learn about and surveil us. Without oversight and accountability, such surveillance can quickly turn from benign data gathering to ominous control, which is particularly frightening in the hands of authoritarian governments.31

  Governments are far from the only entities with access to massive amounts of data about us. Companies can use monitoring software to track how much time per day employees spend typing, to take screenshots of their computers at random, to record every website they visit, and to report this information to their managers.32 Some, especially large tech companies, have access to information not only about their employees, but about all of us. Amazon knows our taste in shopping. Alexa stores snippets of our conversations. Facebook knows what we like and whom we look up to, whom we text and call on WhatsApp, to whom we owe money, and what type of content is more likely to anger us so that we stay on the site longer. Apple Watch captures our heart rates. Through our searches and YouTube views, Google knows what we are interested in. It can even know where we are at a given time, what we sound like, and how we look.33 Because they know what we need and want, these companies have tremendous power that can benefit or harm us depending on how it is used and by whom.

  The temptation to use control over highly valued resources for less than virtuous purposes is ever-present. In her book The Age of Surveillance Capitalism, social psychologist Shoshana Zuboff meticulously documented how companies profit from using and selling our personal data.34 Initially, tech companies captured users’ data to improve the services they offered. Then, in the 1990s, some began using this information to generate revenue by targeting us with ads they knew we were likely to respond to. This business model soon spread, and the race for people’s attention was on. “Engagement” became the “currency of the attention economy,”35 while data became “the new oil.”36 Big tech companies didn’t stop at targeted ads once they realized they could flat-out sell our data to interested parties. Their clients include insurance companies, banks, employers, and political campaigns: anyone willing and ready to pay to know what we want. Their pitch to potential clients is simple: pay us and we can make people do what you want, from buying your products and signing up for your services to voting for you in the next election.

  People at YouTube, Apple, Netflix, and Amazon decide how their platforms recommend what you should watch next. People at Facebook, LinkedIn, Twitter, and TikTok decide how algorithms select the content that fills our newsfeeds. They decide what news we see first online, what posts show up in our newsfeeds, what products pop up when we browse a website, and whom we match with on dating applications. As Twitter’s cofounder Jack Dorsey acknowledged when he testified in front of Congress in 2018, “Every time someone opens up our service, every time someone opens up our app, we are implicitly incentivizing them to do something or not to do something.” He further noted, “I believe we need to question the fundamental incentives that are in our product today.”37
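  None of these companies publish their ranking code, so the specifics here are assumptions, but the shape of the decision can be sketched. In the hypothetical Python scorer below, every weight is a number a person chose; nudge the weight on predicted outrage upward, and a different feed reaches millions of people.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_clicks: float   # model estimate, 0 to 1
    predicted_shares: float   # model estimate, 0 to 1
    predicted_outrage: float  # model estimate, 0 to 1
    hours_old: float

# Hypothetical ranking weights. Each number is a human decision,
# and changing it reshapes what millions of people see first.
W_CLICKS, W_SHARES, W_OUTRAGE, W_DECAY = 1.0, 2.0, 1.5, 0.1

def score(post: Post) -> float:
    engagement = (W_CLICKS * post.predicted_clicks
                  + W_SHARES * post.predicted_shares
                  + W_OUTRAGE * post.predicted_outrage)
    return engagement - W_DECAY * post.hours_old

posts = [
    Post(0.30, 0.10, 0.05, 1.0),  # calm, informative
    Post(0.25, 0.20, 0.90, 1.0),  # enraging
]
print(sorted(posts, key=score, reverse=True)[0])  # the enraging post ranks first
```

  Set W_OUTRAGE to zero and the ordering flips; nothing in the mathematics dictates the choice, only the incentives of whoever sets the weights. This is O’Neil’s point made executable: the opinion lives in the numbers.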

  Thanks to their control over algorithms and personal data, technology companies have become the gatekeepers of commerce and information channels. This is a stunning display of the power of betweenness: Consumers, employees, and suppliers often have little choice but to go through Big Tech to buy, sell, and work. With limited alternatives, they cannot easily withdraw from using a technology or a platform, or from working for a company if they disagree with its practices. The tech companies have often used their dominance to build yet more power and abuse it through anti-competitive practices—like predatory pricing, exclusionary agreements, extortionate fees, self-preferences, and the acquisition of hundreds of rivals to squash the competition.38 As a result, corporations like Google, Amazon, Facebook, and Apple are monopolies or quasi-monopolies with greater economic power than some countries.39

  Where does this leave us? The big tech companies have a tight grip on the fundamentals of power. First, they know what we value in real time, with precision, and at scale, and they can use this information to predict our behaviors. Second, they can unilaterally control not only what data are gathered about us, but also how those data are used to influence our beliefs and actions. Third, they can use their power to reduce alternatives for consumers, suppliers, and competitors alike, and use their money to sway legislators and law enforcement into giving them a great deal of leeway. As a result of such a large and unchecked power imbalance, the decisions made, deliberately or unwittingly, by the technical experts who lead and work for these companies may not account for our interests and well-being. Often, they don’t. What is at stake, therefore, is our ability, and our children’s, to think critically and decide for ourselves what we want and how to behave.40 History has taught us that preserving this ability is critical in the face of propaganda and power concentration. In the digital era, it is particularly crucial as we grapple with deepfakes, fake news, and coordinated disinformation campaigns aimed at altering the very basis of reality.

 
