To be specific, knowledge of the technical aspects of search and retrieval, in terms of critiquing the computer programming code that underlies the systems, is absolutely necessary to have a profound impact on these systems. Interventions such as Black Girls Code, an organization focused on teaching young African American girls to program, are the kind we see building in response to the ways Black women have been locked out of Silicon Valley venture capital and broader participation. Simultaneously, it is important for the public, particularly people who are marginalized—such as women and girls and people of color—to be critical of the results that purport to represent them in the first ten to twenty results in a commercial search engine. These communities do not have the economic, political, and social capital to withstand the consequences of misrepresentation. If one holds a lot of power, one can withstand or buffer misrepresentation at a group level and often at the individual level. Marginalized and oppressed people are linked to the status of their group and are less likely to be afforded individual status and insulation from the experiences of the groups with which they are identified. The political nature of search demonstrates how algorithms are a fundamental invention of computer scientists who are human beings—and code is a language full of meaning, applied in varying ways to different types of information. Certainly, women and people of color could benefit tremendously from becoming programmers and building alternative search engines that are less disturbing and that reflect and prioritize a wider range of informational needs and perspectives.
There is an important and growing movement of scholars raising concerns. Helen Nissenbaum, a professor of media, culture, and communication and computer science at New York University, has written with Lucas Introna, a professor of organization, technology, and ethics at the Lancaster University Management School, about how search engines bias information toward the most powerful online. Their work was corroborated by Alejandro Diaz, who wrote his dissertation at Stanford on sociopolitical bias in Google’s products. Kate Crawford and Tarleton Gillespie, two researchers at Microsoft Research New England, have written extensively about algorithmic bias, and Crawford recently co-organized a summit with the White House and New York University for academics, industry, and activists concerned with the social impact of artificial intelligence in society. At that meeting, I participated in a working group on artificial intelligence and social inequality, where tremendous concern was raised about deep-machine-learning projects and software applications, including concern about furthering social injustice and structural racism. In attendance was the journalist Julia Angwin, one of the investigators of the breaking story about courtroom sentencing software made by Northpointe, used for risk assessment by judges to determine the alleged future criminality of defendants.6 She and her colleagues determined that this type of artificial intelligence miserably mispredicted future criminal activity and led to the overincarceration of Black defendants. Conversely, the reporters found that it was much more likely to predict that White criminals would not offend again, despite the data showing that this was not at all accurate. Sitting next to me was Cathy O’Neil, a data scientist and the author of the book Weapons of Math Destruction, who has an insider’s view of the way that math and big data are directly implicated in the financial and housing crisis of 2008 (which, incidentally, destroyed more African American wealth than any other event in the United States, save for the failure to compensate African Americans for three hundred years of forced enslavement). Her view from Wall Street was telling:
The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.7
Our work, each of us, in our respective way, is about interrogating the many ways that data and computing have become so profoundly their own “truth” that even in the face of evidence, the public still struggles to hold tech companies accountable for the products and errors of their ways. These errors increasingly lead to racial and gender profiling, misrepresentation, and even economic redlining.
At the core of my argument is the way in which Google biases search toward its own economic interests—for its profitability and to bolster its market dominance at any expense. Many scholars are working to illuminate the ways in which users trade their privacy, personal information, and immaterial labor for “free” tools and services offered by Google (e.g., search engine, Gmail, Google Scholar, YouTube) while the company profits from data mining its users. Siva Vaidhyanathan, professor of media studies at the University of Virginia, has written one of the most important books on Google to date; his research demonstrates Google’s dominance over the information landscape and forms the basis of a central theme in this research. Frank Pasquale, a professor of law at the University of Maryland, has also forewarned of the increasing levels of control that algorithms have over the many decisions made about us, from credit to dating options, and how difficult it is to intervene in their discriminatory effects. The political economic critique of Google by Elad Segev, a senior lecturer in media and communication in the Department of Communication at Tel Aviv University, charges that we can no longer ignore the global dominance of Google and the implications of its power in furthering digital inequality, particularly as it serves as a site for fostering global economic divides.
However, what is missing from the extant work on Google is an intersectional power analysis that accounts for the ways in which marginalized people are exponentially harmed by Google. Since I began writing this book, Google’s parent company, Alphabet, has expanded its power into drone technology,8 military-grade robotics, fiber networks, and behavioral surveillance technologies such as Nest and Google Glass.9 These are just several of many entry points to thinking about the implications of artificial intelligence as a human rights issue. We need to be concerned about not only how ideas and people are represented but also the ethics of whether robots and other forms of automated decision making can end a life, as in the case of drones and automated weapons. To whom do we appeal? What bodies govern artificial intelligence, and where does the public raise issues or lodge complaints with national and international courts? These questions have yet to be fully answered.
In the midst of Google’s expansion, Google Search is one of the most underexamined areas of consumer protection policy,10 and regulation has been far less successful in the United States than in the European Union. A key aspect of generating policy that protects the public is the accumulation of research about the impact of what an unregulated commercial information space does to vulnerable populations. I do this by taking a deep look at a snapshot of the web, at a specific moment in time, and interpreting the results against the history of race and gender in the U.S. This is only one of many angles that could be taken up, but I find it to be one of the most compelling ways to show how data is biased and perpetuates racism and sexism. The problems of big data go deeper than misrepresentation, for sure. They include decision-making protocols that favor corporate elites and the powerful, and they are implicated in global economic and social inequality. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them. Diana Ascher,11 in her dissertation on yellow journalism and cultural time orientation in the Department of Information Studies at UCLA, found there was a stark difference between headlines generated by social media managers from the LA Times and those provided by automated, algorithmically driven software, which generated severe backlash on Twitter. In this case, Ascher found that automated tweets in news media were more likely to be racist and misrepresentative, as in the case of police shooting victim Keith Lamont Scott of Charlotte, North Carolina, whose murder triggered nationwide protests of police brutality and excessive force.
There are many such examples. In the ensuing chapters, I continue to probe the results that are generated by Google on a variety of keyword combinations relating to racial and gender identity as a way of engaging a commonsense understanding of how power works, with the goal of changing these processes of control. By seeing and discussing these intersectional power relations, we have a significant opportunity to transform the consciousness embedded in artificial intelligence, since it is in fact, in part, a product of our own collective creation.
Figure 1.10. Automated headline generated by software and tweeted about Keith Lamont Scott, killed by police in North Carolina on September 20, 2016, as reported by the Los Angeles Times.
Theorizing Search: A Black Feminist Project
The impetus for my work comes from theorizing Internet search results from a Black feminist perspective; that is, I ask questions about the structure and results of web searches from the standpoint of a Black woman—a standpoint that drives me to ask different questions than have been previously posed about how Google Search works. This study builds on previous research that looks at the ways in which racialization is a salient factor in various engagements with digital technology represented in video games,12 websites,13 virtual worlds,14 and digital media platforms.15 A Black feminist perspective offers an opportunity to ask questions about the quality and content of racial hierarchies and stereotyping that appear in results from commercial search engines such as Google’s; it contextualizes them by decentering the dominant lenses through which results about Black women and girls are interpreted. By doing this, I am purposefully theorizing from a feminist perspective, while addressing often-overlooked aspects of race in feminist theories of technology. Sandra Harding, professor emerita of science and technology at UCLA, suggests that there is value in identifying a feminist method and epistemology:
Feminist challenges reveal that the questions that are asked—and, even more significantly, those that are not asked—are at least as determinative of the adequacy of our total picture as are any answers that we can discover. Defining what is in need of scientific explanation only from the perspective of bourgeois, white men’s experiences leads to partial and even perverse understandings of social life. One distinctive feature of feminist research is that it generates problematics from the perspective of women’s experiences.16
Rather than assert that problematic or racist results are impossible to correct, in the ways that the Google disclaimer suggests,17 I believe a feminist lens, coupled with racial awareness about the intersectional aspects of identity, offers new ground for interpreting and understanding the implications of such problematic positions about the benign instrumentality of technologies. Black feminist ways of knowing, for example, can look at searches on terms such as “black girls” and bring into the foreground evidence about the historical tendencies to misrepresent Black women in the media. Of course, these misrepresentations, and the use of big data to maintain and exacerbate social relationships, serve a powerful role in maintaining racial and gender subjugation. It is the persistent normalization of Black people as aberrant and undeserving of human rights and dignity under the banners of public safety, technological innovation, and the emerging creative economy that I directly challenge by showing the egregious ways that dehumanization is rendered a legitimate free-market technology project.
I am building on the work of previous scholars of commercial search engines such as Google but am asking new questions that are informed by a Black feminist lens concerned with social justice for people who are systemically oppressed. I keep my eye on complicating the notion that information assumed to be “fact” (by virtue of its legitimation at the top of the information pile) exists because racism and sexism are profitable under our system of racialized capitalism. The ranking hierarchy that the public embraces reflects our social values that place a premium on being number one, and search-result rankings live in this de facto system of authority. Where other scholars have problematized Google Search in terms of its lack of neutrality and prioritization of its own commercial interests, my critiques aim to explicitly address racist and sexist bias in search, fueled by neoliberal technology policy over the past thirty years.
Black Feminism as Theoretical and Methodological Approach
The commodified online status of Black women’s and girls’ bodies deserves scholarly attention because, in this case, their bodies are defined by a technological system that does not take into account the broader social, political, and historical significance of racist and sexist representations. The very presence of Black women and girls in search results is misunderstood and clouded by dominant narratives of the authenticity and lack of bias of search engines. In essence, the social context or meaning of derogatory or problematic Black women’s representations in Google’s ranking is normalized by virtue of their placement, making it easier for some people to believe that what exists on the page is strictly the result of the fact that more people are looking for Black women in pornography than anything else. This is because the public believes that what rises to the top in search is either the most popular or the most credible or both.
Yet this does not explain why the word “porn” does not have to be included in keyword searches on “black girls” and other girls and women of color to bring it to the surface as the primary data point about girls and women. The political and social meaning of such output is stripped away when Black girls are explicitly sexualized in search rankings without any explanation, particularly without the addition of the words “porn” or “sex” to the keywords. This phenomenon, I argue, is replicated from offline social relations and deeply embedded in the materiality of technological output; in other words, traditional misrepresentations in old media are made real once again online and situated in an authoritative mechanism that is trusted by the public: Google. The study of Google searches as an Internet artifact is telling. Black feminist scholars have already articulated the harm of such media misrepresentations:18 gender, class, power, sexuality, and other socially constructed categories interact with one another in a matrix of social relations that create conditions of inequality or oppression.
Black feminist thought offers a useful and antiessentializing lens for understanding how both race and gender are socially constructed and mutually constituted through historical, social, political, and economic processes,19 creating interesting research questions and new analytical possibilities. As a theoretical approach, it challenges the dominant research on race and gender, which tends to universalize problems assigned to race or Blackness as “male” (or the problems of men) and organizes gender as primarily conceived through the lenses and experiences of White women, leaving Black women in a precarious and understudied position. Popular culture provides countless examples of Black female appropriation and exploitation of negative stereotypes either to assert control over the representation or at least to reap the benefits of it. The Black feminist scholar bell hooks has written extensively on the ways that neoliberal capitalism is explicitly implicated in misrepresentations and hypersexualization of Black women. hooks’s work is a mandate for Black women interested in theorizing in the new media landscape, and I use it as both inspiration and a call to action for other Black women interested in engaging in critical information studies. In total, this research is informed by a host of scholars who have helped me make sense of the ways that technology ecosystems—from traditional classification systems such as library databases to new media technologies such as commercial search engines—are structuring narratives about Black women and girls. In the cases I present, I demonstrate how commercial search engines such as Google not only mediate but are mediated by a series of profit-driven imperatives that are supported by information and economic policies that underwrite the commodification of women’s identities. Ultimately, this book is designed to “make it plain,” as we say in the Black community, just exactly how it can be that Black women and girls continue to have their image and representations assaulted in the new media environments that are not so unfamiliar or dissimilar to old, traditional media depictions. I intend to meaningfully articulate the ways that commercialization is the source of power that drives the consumption of Black women’s and girls’ representative identity on the web.
While primarily offering reflection on the effects of search-engine-prioritized content, this research is at the same time intended to bring about a deeper inquiry and a series of strategies that can inform public-policy initiatives focused on connecting Black people to the Internet, in spite of research showing that cultural barriers, norms, and power relations alienate Black people from the web.20 After just over a decade of focus on closing the digital divide,21 the research questions raised here are meant to provoke a discussion about “what then?” What does it mean to have every Black woman, girl, man, and boy in the United States connected to the web if the majority of them are using a search engine such as Google to access content—whether about themselves or other things—only to find results like those with which I began this introduction? The race to digitize cultural heritage and knowledge is important, but for users who do not know precisely how to find that knowledge, access is often mediated by a search engine, much the way a library patron relies on the deep knowledge and skills of a reference librarian to navigate the vast volumes of information in the library stacks.