Algorithms of Oppression


by Safiya Umoja Noble


  I was submitted to isanyoneup.com by my ex-boyfriend. I am confronted by friends, family and strangers that they have seen me naked online everyday. . . . You may think it’s funny but sometimes [I] don’t want to leave my house and go to the mall with my family because I fear somebody will come up to me while I’m with my mother and mention it. My sisters . . . are ashamed to be related with me and want to lie to their friends that they are my sisters. I am a disgrace to my family. . . . My self worth has gone out the window and I worry I may never get it back. This keeps me one step away from happiness every single day. I don’t know what to do anymore.3

  The circulation of sexually explicit material has prompted thirty-four states to enact “revenge porn” laws, or laws that address nonconsensual pornography (NCP), defined by the Cyber Civil Rights Initiative as the distribution of sexually graphic images of individuals without their consent.4 Laws currently range from misdemeanors to felonies, depending on the nature of the offense. On December 4, 2015, the first conviction under the California “revenge porn” law, of Noe Iniguez, was reported by the Los Angeles Times. Iniguez posted to Facebook a topless photo of his ex-girlfriend, including a series of slurs that included encouraging her employer to fire her.5 In December 2015, Hunter Moore of IsAnyoneUp.com was sentenced to two and a half years in prison after pleading guilty to “one count of unauthorized access to a protected computer to obtain information for purposes of private financial gain and one count of aggravated identity theft,” according to the Washington Post.6

  What does it mean that one’s past is always determining one’s future because the Internet never forgets?

  On the Right to Be Forgotten

  These cases in the U.S. are typical, but there are many scenarios that have prompted people to call for expanded protections online. In 2014, the European Court of Justice ruled in the case of Google Spain v. AEPD and Mario Costeja González7 that people have the right to request delisting of links to information about them from search engines, particularly if that information on the web may cause them personal harm. The pivotal legal decision was not without substantive prior effort at securing “the right to delete,” “the right to forget or be forgotten,” “the right to oblivion,” or “the right to erasure,” all of which have been detailed in order to better distinguish the rights that European citizens have in controlling information about themselves on the web.8 In 2009, the French government signed the “Charter of good practices on the right to be forgotten on social networks and search engines,”9 which stands as a marker of the importance of personal control over information on the web.10 Since then, considerable debate and pushback from Google has ensued, highlighting the tensions between corporate control over personal information and public interest in the kinds of records that Google keeps.

  At the center of the calls for greater transparency over the kinds of information that people are requesting removal of from the Internet is a struggle over power, rights, and notions of what constitutes freedom, social good, and human rights to privacy and the right to futures unencumbered by the past. The rulings against Google that support the “right to be forgotten” law currently affect only the European Union. Such legal protections are not yet available in the United States, where greater encroachments on personal information privacy thrive and where vulnerable communities and individuals are less likely to find recourse when troublesome or damaging information exists and is indexed by a commercial search engine. However, Google is still indexing and archiving links about people and groups within the EU on its domains outside of Europe, such as on google.com, opening up new challenges to the notion of national boundaries of the web and to how national laws extend to information that is digitally available beyond national borders. These laws, however, generally ignore the record keeping that Google does on individuals and organizations, records that are archived and shared with third parties beyond Google’s public-facing search results.

  I am not talking solely about the harmful effects of search results for groups of people. I am also concerned about the logic and harm caused by our reliance on large corporations to feed us information, information that ultimately leads us somewhere, often to places unexpected and unintended. In the case of web results, this means communicating erroneous, false, or downright private information that one would otherwise not want perceived as the “official record” of the self on Google, the effects of which can be devastating. A difficult aspect of challenging group versus individual representations online is that there is no protection or basis for action under our current legal regime. Public records, which can include web results, whether organized by the state or vis-à-vis corporations, work in service of a privatized public good. Google and other large monopolies in the information and communications technology sector have a responsibility to communities, as much as they do to individuals. Currently, there is key legislation that challenges Google’s records of information it provides about individuals, much of which is being discussed through legislative reforms such as the “right to be forgotten” policies in the European Union,11 and new laws in the U.S. are emerging around “revenge porn.” These tensions need to be taken up by and for communities and groups, particularly marginalized racial minorities in the United States and abroad, whose collective experiences, rights, and representations are not sufficiently protected online. Search results are records, and the records of human activity are a matter of tremendous contestation; they are a battleground over the identity, control, and boundaries of legitimate knowledge. Records, in the form of websites, and their visibility are power. Ultimately, neither individuals nor communities are sufficiently protected in Google’s products, and both need the attention of legislators in the United States.

  At a time when state funding for public goods such as universities, schools, libraries, archives, and other important memory institutions is in decline in the U.S., private corporations are providing products, services, and financing on their behalf. With these trade-offs comes an exercising of greater control over the information, which is deeply consequential for those who are already systematically oppressed, as noted by the many scholars I have discussed in this book. They are also of incredible consequence for young people searching for information and ideas who are not able to engage their ideas with teachers, professors, librarians, and experts from a broad range of perspectives because of structural barriers such as the skyrocketing cost of college tuition and the incredible burdens of student debt. If advertising companies such as Google are the go-to resource for information about people, cultures, ideas, and individuals, then these spaces need the kinds of protections and attention that work in service of the public.

  In the context of searching for racialized and gendered identities in Google’s search engine, the right to control what information or records can exist and persist is important. It is even more critical because the records are presented in a ranking order, and research shows that the public in the U.S. believes that search results are credible and trustworthy.12 As already noted in the previous chapters, Google exercises considerable discursive and hegemonic control over identity at the group and cultural levels, and it also has considerable control over personal identity and what can circulate in perpetuity, or be forgotten, through take-downs or delisting of bad information. Searches on keywords about minoritized, marginalized, and oppressed groups can yield all kinds of information that may or may not be credible or true, but they surface in a broader culture of implicit bias that already exists against minority groups. The right to be forgotten is an incredibly important mechanism for thinking through whether instances of misrepresentation can be impeded or stopped.

  Our worst moments are also for sale, as police database mug shots are the fodder of online platforms that feature pictures of people who have been arrested. This is a practice that disproportionately impacts people of color, particularly African Americans, who are overarrested in the United States for crimes that they may not be convicted of in court. New platforms such as Mugshots.com and UnpublishArrest.com are services that promise, for a fee of $399 (one arrest) up to $1,799 (for five arrests), to remove mug shots from the Mugshots.com database across all major search engines. UnpublishArrest.com notes, “As a courtesy, when permanent unpublishing is chosen and information is unpublished for The Mugshots.com Database; requests will be submitted to Google to have the inactive links (dead links) and mugshots associated with the arrest(s) and Mugshots.com removed from the Google search results. Google results are controlled by Google and as such; courtesy Google submissions are not guaranteed nor are they part of the optional paid service provided. Google’s removal lead times average 7–10 days and can take as long as 4–6 weeks.”13 Proponents of this practice, including lawmakers and public-interest organizations, argue that this is a public safety issue and that the public has a right to know who potential criminals are in their communities. Opponents of the practice argue that it is a privacy issue and a matter that inflames the public, particularly people who are not found guilty but who appear guilty given the titillating nature of the public display of these photos.

  Research shows just how detrimental the lack of control over identity is. In 2012, Latanya Sweeney, a professor of government and technology at Harvard University and the director of the Data Privacy Lab in the Institute of Quantitative Social Science at Harvard, showed that Google searches on African American–sounding names are more likely to produce criminal-background-check advertisements than are searches on White-sounding names.14 Time and again, the research shows that racial bias is perpetuated in the commercial information portals that the public relies on day in and day out. Yet, as I have noted in previous chapters, the prioritization and circulation of misrepresentative and even derogatory information about people who are oppressed and maligned in the larger body politic of a nation, as are African Americans, Native Americans, Latinos, and other peoples of color, is an incredible site of profit for media platforms, including Google. We need to think about delisting or even deprioritizing particular types of representative records. How do we reconcile the fact that ethnic and cultural communities have little to no control over being indexed in ways that they may not want? How does a group resolve the ways that the public engages with Google as if it is the arbiter of truth?

  The recording of human activity is not new. In the digital era, the recordings of human digital engagements are a matter of permanent record, whether known to people or not. Memory making and forgetting through our digital traces is not a choice, as the recording of human activities through digital software, hardware, and infrastructure is a necessary and vital component of the design and profit schemes behind them. The information studies scholars Jean-François Blanchette and Deborah Johnson suggest that the tremendous capture and storage of data, without plans for data disposal, undermines our “social forgetfulness,” a necessary new beginning or “fresh start,” that should be afforded people in the matter of their privacy record keeping. They argue that much policy and media focus has been on the access and control that corporations have over our personal information, but less attention has been paid to the retention of our every digital move.15

  The Edward Snowden revelations in 2014 made some members of the public aware that governments, through multinational corporations such as Verizon and Google, were not only collecting but also storing private records of digital activity of millions of people around the world. The threats to democracy and to individual privacy rights through the recording of individuals’ information must be taken up, particularly in the context of persistent racialized oppression.

  I foreground previous work about why we should be concerned about data retention in the digital world and the ways in which the previous paper-based information-keeping processes by institutions faced limits of space and archival capacity. These limits of space and human labor in organization and preservation presupposed a type of check, or “institutional forgetfulness,”16 that was located in the storage medium itself, rather than relating to policy limits on holding information for long periods of time. Oscar Gandy, Jr., aptly characterizes the nature of why forgetting should be an important, protected right:

  The right to be forgotten, to become anonymous, and to make a fresh start by destroying almost all personal information, is as intriguing as it is extreme. It should be possible to call for and to develop relationships in which identification is not required and in which records are not generated. For a variety of reasons, people have left home, changed their identities, and begun their lives again. If the purpose is non-fraudulent, is not an attempt to escape legitimate debts and responsibilities, then the formation of new identities is perfectly consistent with the notions of autonomy I have discussed.17

  These rights to become anonymous include our rights to become who we want to be, with a sense of future, rather than to be locked into the traces and totalizing effect of a personal history that dictates, through the record, a matter of truth about who we are and potentially can become. The record, then, plays a significant ontological role in the recognition of the self by existing, or not, in an archived body of information.18 In the case of Google, though not an archive of specific intent organized in the interest of a particular concern, it functions as one of the most ubiquitous and powerful record keepers of digital engagement. It records our searches or inquiries, our curiosities and thoughts.

  The record, then, in the context of Google, is never ending. Its data centers, as characterized in a recent YouTube video produced by Google,19 keep copies of our personal information on at least two servers, with “more important data” on digital tape. The video does not explain which data is considered most important, nor does it state how long data is stored on Google’s servers. In many ways, Google’s explanations about how it manages data storage speak to and assuage sensitivities about Web 2.0 transactions such as credit card protections or secure information (Social Security numbers, passwords) transmitted over the Internet that might be used for online financial or highly private transactions.

  Google says,

  We safeguard your data.

  Rather than storing each user’s data on a single machine or set of machines, we distribute all data—including our own—across many computers in different locations. We then chunk and replicate the data over multiple systems to avoid a single point of failure. We randomly name these data chunks as an extra measure of security, making them unreadable to the human eye.

  While you work, our servers automatically back up your critical data. So when accidents happen—if your computer crashes or gets stolen—you can be up and running again in seconds.

  Lastly, we rigorously track the location and status of each hard drive in our data centers. We destroy hard drives that have reached the end of their lives in a thorough, multi-step process to prevent access to the data.

  Our security team is on-duty 24x7.

  Our full-time Information Security Team maintains the company’s perimeter defense systems, develops security review processes, and builds our customized security infrastructure. It also plays a key role in developing and implementing Google’s security policies and standards.

  At the data centers themselves, we have access controls, guards, video surveillance, and perimeter fencing to physically protect the sites at all times.20
