Algorithms of Oppression

by Safiya Umoja Noble


  Google is a powerful and important resource for organizing information and facilitating social cooperation and contact, while it simultaneously reinforces hegemonic narratives and exploits its users. This has widely been characterized by critical media scholars as a dialectic that has less to do with Google’s technologies and services and more to do with the organization of labor and the capitalist relations of production.38 The notion that Google/Alphabet has the potential to be a democratizing force is certainly laudable, but the contradictions inherent in its projects must be contextualized in the historical conditions that both create it and are created by it. Thinking about the specifics of who benefits from these practices—from hiring to search results to technologies of surveillance—makes clear that these problems and projects are not equally experienced. I have written, for example, with my colleague Sarah T. Roberts, about the myriad problems with a project such as Google Glass and the problems of class privilege that directly map to the failure of the project and the intensifying distrust of Silicon Valley gentrifiers in tech corridors such as San Francisco and Seattle.39 The lack of introspection about whether the public wanted to be surveilled at the level of intensity that Google Glass provided is part of the problem: centuries-old concepts of conquest and exploration of every landscape, no matter its inhabitants, are seen as emancipatory rather than colonizing and totalizing for people who fall within its gaze. People on the street may not characterize Google Glass as a neocolonial project in the way we do, but they certainly know they do not like seeing it pointed in their direction; and the visceral responses to Google Glass wearers as “Glassholes” are just one indicator of public distrust of these kinds of privacy intrusions.

  The neocolonial trajectories are not limited to products such as search or Google Glass but exist throughout the networked economy, where some of the most exploited workers, including child and forced laborers,40 mine an ore called columbite-tantalite (abbreviated as “coltan”) in places such as the Democratic Republic of Congo. The ore provides raw materials for companies such as Nokia, Intel, Sony, and Ericsson (and now Google)41 that need such minerals to produce components such as tantalum capacitors, used in microprocessor chips for hardware such as phones and computers.42 Others in the digital-divide network serve as supply-chain producers for hardware companies such as Apple43 or Dell,44 as labor outsourced from the U.S. goes to the low bidders that provide the cheapest workforces under neoliberal economic policies of globalization.

  To review, in this ecosystem, Black people provide the most grueling labor for blood minerals, and they do the dangerous, toxic work of dismantling e-waste in places such as Ghana, where the rest of the world ships huge piles of poisonous waste from discarded electronics. In the United States, Black labor has for the most part been bypassed in the manufacturing sector, a previous site of more stable unionized employment, because of electronics and IT outsourcing to Asia. African American identities are often a commodity, exploited as titillating fodder in a network that traffics in racism, sexism, and homophobia for profit. Meanwhile, the onus for change is placed on the backs of Black people, and Black women in the United States in particular, to play a more meaningful role in the production of new images and ideas about Black people by learning to code, as if that alone could shift the tide of Silicon Valley’s vast exclusionary practices in its products and hiring.

  Michele Wallace, a professor of English at the City College of New York and the Graduate Center of the City University of New York (CUNY), notes the crisis in the lack of management, design, and control that Black people have over the production of commercial culture. She states that under these conditions, Black people will be “perpetual objects of contemplation, contempt, derision, appropriation, and marginalization.”45 Janell Hobson at the University at Albany draws important attention to Wallace’s commentary on Black women as creative producers in the context of the information age. She confirms that this confluence of media production on the web is part of the exclusionary terrain for Black women, who are underrepresented in many aspects of the information industry.46 I would add to her argument that while it is true that the web can serve as an alternative space for conceiving of and sharing empowered conceptions of Black people, this happens in a highly commercially mediated environment. It is not enough simply to be “present” on the web; we must consider the implications of what it means to be on the web in the “long tail” or mediated out of discovery and meaningful participation, when such participation could have a transformative impact on the enduring and brutal economic and social disenfranchisement of African Americans, especially Black women.

  Social Inequality Will Not Be Solved by an App

  An app will not save us. We will not sort out social inequality lying in bed staring at smartphones. It will not stem from simply sending emails to people in power, one person at a time. New, neoliberal conceptions of individual freedoms (especially in the realm of technology use) are oversupported in direct opposition to protections realized through large-scale organizing to ensure collective rights. This is evident in the past thirty years of active antilabor policies put forward by several administrations47 and in increasing hostility toward unions and twenty-first-century civil rights organizations such as Black Lives Matter. These proindividual, anticommunity ideologies have been central to the antidemocratic, anti-affirmative-action, antiwelfare, antichoice, and antirace discourses that place culpability for individual failure on moral failings of the individual, not policy decisions and social systems.48 Discussions of institutional discrimination and systemic marginalization of whole classes and sectors of society have been shunted from public discourse for remediation and have given rise to viable presidential candidates such as Donald Trump, someone with a history of misogynistic violence toward women and anti-immigrant schemes. Despite resistance to this kind of vitriol in the national electoral body politic, society is also moving toward greater acceptance of technological processes that are seemingly benign and decontextualized, as if these projects are wholly apolitical and without consequence too. Collective efforts to regulate or provide social safety nets through public or governmental intervention are rejected. In this conception of society, individuals make choices of their own accord in the free market, which is normalized as the only legitimate source of social change.49

  It is in this broader social and political environment that the Federal Communications Commission and the Federal Trade Commission have been reluctant to regulate the Internet environment, with the exception of the Children’s Internet Protection Act50 and the Child Safe Viewing Act of 2007.51 Racist, sexist, and homophobic harms have largely gone unaddressed by the FCC, which places the onus of proving harm on the individual. I am trying to make the case, through the mounting evidence, that unregulated digital platforms cause serious harm. Trolling is directly linked to harassment offline, to bullying and suicide, and to threats and attacks. The entire experiment of the Internet is now with us, yet its psychological and social impact on the public has not received sufficiently intense scrutiny at the level of public policy.

  The reliability of public information online must be understood in the context of the real, lived experiences of Americans who are increasingly entrenched in the shifts occurring in the information age. An enduring feature of the American experience is gross systemic poverty, whereby the largest percentages of people living below the poverty line and suffering from un- and underemployment are women and children of color. The economic crisis continues to disproportionately impact poor people of color, especially Black / African American women, men, and children.52 Furthermore, the gap between Black and White wealth has become so acute that a recent report by Brandeis University found that it quadrupled between 1984 and 2007, making Whites five times richer than Blacks in the U.S.53 This is not the result of moral superiority; it is directly linked to the gamification of financial markets through algorithmic decision making. It is linked to the exclusion of Blacks, Latinos, and Native Americans from high-paying jobs in the technology sector. It is a result of digital redlining and the resegregation of the housing and educational markets, fueled by seemingly innocuous big-data applications that allow the public to set tight parameters on their searches for housing and schools. Never before has it been so easy to set a school rating in a digital real estate application such as Zillow.com to preclude the possibility of attending “low-rated” schools, using data that reflect the long history of separate-but-equal, underfunded schools in neighborhoods where African Americans and low-income people live. These data-intensive applications that work across vast data sets do not show the microlevel interventions being made to racially and economically integrate schools and foster educational equity. They simply make it easy to take for granted data about “good schools” that almost exclusively map to affluent, White neighborhoods. We need more intense attention on how these types of artificial intelligence, under the auspices of individual freedom to make choices, forestall our ability to see what kinds of choices we are making and the collective impact of those choices in reversing decades of struggle for social, political, and economic equality. Digital technologies are implicated in these struggles.

  These dramatic shifts are occurring in an era of U.S. economic policy that has accelerated globalization, moved real jobs offshore, and decimated labor interests. Claims that the society is moving toward greater social equality are undermined by data that show a substantive decrease in access to home ownership, education, and jobs—especially for Black Americans.54 In the midst of the changing social and legal environment, inventions of terms and ideologies of “colorblindness” disingenuously purport a more humane and nonracist worldview.55 This is exacerbated by celebrations of multiculturalism and diversity that obscure structural and social oppression in fields such as education and information sciences, which are shaping technological practices.56 Research by Sharon Tettegah, a professor of education at the University of Nevada, Las Vegas, shows that people invested in colorblindness are also less empathetic toward others.57 Making race the problem of those who are racially objectified, particularly when seeking remedy from discriminatory practices, obscures the role of government and the public in solving systemic issues.58

  Central to these “colorblind” ideologies is a focus on the inappropriateness of “seeing race.” In sociological terms, colorblindness precludes the use of racial information and does not allow any classifications or distinctions.59 Yet, despite the claims of colorblindness, research shows that those who report higher racial colorblind attitudes are more likely to be White and more likely to condone or not be bothered by derogatory racial images viewed in online social networking sites.60 Silicon Valley executives, as previously noted, revel in their embrace of colorblindness as if it is an asset and not a proven liability. In the midst of reenergizing the effort to connect every American and to stimulate new economic markets and innovations that the Internet and global communications infrastructures will afford, the real lives of those who are on the margin are being reengineered with new terms and ideologies that make a discussion about such conditions problematic, if not impossible, and that place the onus of discriminatory actions on the individual rather than situating problems affecting racialized groups in social structures.61

  Formulations of postracialism presume that racial disparities no longer exist, a context within which the colorblind ideology finds momentum.62 George Lipsitz, a critical Whiteness scholar and professor at the University of California, Santa Barbara, suggests that the challenge of recognizing racial disparities and the social (and technical) structures that instantiate them reflects the possessive investment in Whiteness—the way White hegemonic ideas about race and privilege mask real social problems.63 I often challenge audiences who come to my talks to consider that at the very historical moment when structural barriers to employment were being addressed legislatively in the 1960s, our reliance on modern technologies was emerging, along with the claim that computers could make better decisions than humans. I do not think it a coincidence that just as women and people of color were finally given the opportunity to participate in limited spheres of decision making in society, computers were simultaneously celebrated as a more optimal choice for making social decisions. The rise of big-data optimism is here, and if ever there were a time when politicians, industry leaders, and academics were enamored with artificial intelligence as a superior approach to sense-making, it is now. This should be a wake-up call for people living in the margins, and for people aligned with them, to engage in thinking through the interventions we need.

  Conclusion

  Algorithms of Oppression

  We have more data and technology than ever in our daily lives and more social, political, and economic inequality and injustice to go with it. In this book, I have sought to critique the political-economic framework and representative discourse that surrounds racial and gendered identities on the web, but more importantly, I have shined a light on the way that algorithms are value-laden propositions worthy of our interrogation. I am particularly mindful of the push for digital technology adoption by Black / African Americans, divorced from the context of how digital technologies are implicated in global racial power relations. I have tried to show how traditional media misrepresentations have been instantiated in digital platforms such as search engines and that search itself has been interwoven into the fabric of American culture. Although rhetoric of the information age broadly seeks to disembody users, or at least to minimize the hegemonic backdrop of the technological revolution, African Americans have embraced, modified, and contextualized technology into significantly different frameworks despite the relations of power expressed in the socio-algorithms. This book can open up a dialogue about radical interventions on socio-technical systems in a more thoughtful way that does not further marginalize people who are already in the margins. Algorithms are, and will continue to be, contextually relevant and loaded with power.

  Toward an Ethical Algorithmic Future

  This book opens up new lines of inquiry using what I believe can be a black feminist technology studies (BFTS) approach to Internet research. BFTS could be theorized as an epistemological approach to researching gendered and racialized identities in digital and analog media studies, and it offers a new lens for exploring power as mediated by intersectional identities. More research on the politics, culture, and values embedded in search can help frame a broader context of African American digital technology usage and early adoption, which is largely underexamined, particularly from the perspectives of women and girls. BFTS is a way to move learning beyond the traditional discourse about technology consumption—and the lack thereof—among Black people. Future research using this framework can surface counternarratives about Black people and technology and can include how African American popular cultural practices are influencing non–African American youth.1 Discourses about African Americans and women as technologically illiterate are nothing new, but dispelling the myth of Blacks / African Americans as marginal to the broadest base of digital technology users can help us define new ways of thinking about motivations in the next wave of technology innovation, design, and, quite possibly, resistance.

  Algorithms and Invisibility: My Interview with Kandis

  Most of the attention to the protection of online information has been argued legally as a matter of “rights.” Rights are a type of property, or entitlement, that function on the web through a variety of narratives, such as “free speech” and “freedom of expression,” all of which are constitutionally protected in the United States. The framing of web content and ownership of web URLs as “property” afforded private protections is of consequence for individuals, as noted in Jessie Daniels’s aforementioned work, which documents the misrepresentation of Dr. Martin Luther King Jr. at the site martinlutherking.org, a cloaked website managed by neo-Nazis and White supremacists at Stormfront.2 Private ownership of identity on the web is a matter of who can pay and who lines up quickly enough to purchase the identity markers that establish a type of official record about a person or a group of people. Indeed, anyone can own anyone else’s identity in the current digital landscape. The right to control group and personal identity and memory must become a matter of concern for archivists, librarians, and information workers, and a matter of Internet regulation and public policy.

 
