
Think Black


by Clyde W. Ford


  Search engine providers can suppress or elevate content. Sites such as American Renaissance and Infowars can also manipulate their own page ranking through search engine optimization (SEO). Google bombing, Google washing, content reoptimization, linkable asset development, redirection management, accelerated mobile pages, and content personalization are just some of many SEO strategies that can trick a search engine’s algorithm into ranking a site higher, thereby making a user believe more strongly in the information the site contains. Political parties can create websites with fake news on their opponents, and then through SEO they can cause those pages to rank high in search results.

  Among many “dirty tricks,” Russian interference in the 2016 elections escalated such manipulation to new heights. The Internet Research Agency, the vanguard of Russia’s attempt to undermine public trust in the American electoral system, created “sleeper” websites that posted real local news to build trust, credibility, and readership for future unforeseen actions.

  “They set them up for a reason,” said Bret Schafer, an analyst for the Alliance for Securing Democracy. “And if at any given moment, they wanted to operationalize this network of what seemed to be local American news handles, they can significantly influence the narrative on a breaking news story. But now instead of just showing up online and flooding it with news sites, they have these accounts with two years of credible history.”12

  Twitter states that any such Russian sites created through its service were identified and shut down. But Twitter is just one of many avenues for building and publishing such surreptitious sites.13

  Though my father, and other systems engineers of his generation, sought to lay the groundwork for the ultimate public forum and arbiter of democratic discourse, digital technology has rendered the internet an unsafe and unfit venue for conversations about race that might lead to meaningful progress. I fear that if my father were alive today and searched the internet for information about Black-on-White crime, his deep-seated beliefs in his own inferiority, because of his skin color, would simply be confirmed.

  But the technology my father helped usher into the world and the problems this technology poses to improved race relations are more profound and more pervasive than the order of results on a search page. The 2016 American presidential election lays bare what happens when smartphones and social media are deliberately used to accentuate racial fissures.

  The Pew Research Center reported that Black voter turnout declined sharply in the 2016 presidential election for the first time in twenty years, down from a high of 66.6 percent in 2012 to 59.6 percent in 2016, a decline of nearly 765,000 in the number of Black voters. More troubling, voter turnout increased among millennials with the exception of Black millennials, whose turnout actually decreased by nearly 6 percent.14 In 2016, Black voter suppression worked—whether from Russian trolls, Republican hijinks, or frustration with the slow pace of progress in racial relations—well enough to shift the results of the presidential election. And this outcome did not escape then president-elect Donald Trump, who said to an all-White crowd at a postelection rally in Hershey, Pennsylvania: “They didn’t come out to vote for Hillary. They didn’t come out. And that was big—so thank you to the African-American community.”15

  Why did they not come out to vote? One need look no further than Black Lives Matter.

  Black Lives Matter (BLM) is a critically important, international activist movement that originated in Black communities in America in response to racial profiling, police brutality, and racial inequity in the criminal justice system. From Trayvon Martin to Eric Garner to Michael Brown and the many other Black people, mostly young, killed by police, BLM rose in prominence as an organization ready to take direct action and bring these injustices to the forefront of the national discussion on race and racial relations. While paying homage to a history of social justice activism in Black communities across the country, BLM also sharply distinguished itself from past movements through a decentralized, nonhierarchical leadership structure and through inclusion of those traditionally on the margins of past freedom movements: the Black disabled, the undocumented, women, LGBTQ individuals, and all those along the gender spectrum.

  Black Lives Matter is inseparable from smartphones and social media. Smartphones allowed police brutality and killings to be recorded and broadcast worldwide via social media networks. In response to Trayvon Martin’s killing, Alicia Garza authored a Facebook post saying, “black people. I love you. I love us. Our lives matter,” to which Patrisse Cullors replied, “#blacklivesmatter.” A third woman, Opal Tometi, joined them to create Black Lives Matter as an online activist campaign. The movement uses social media to organize protests, demonstrations, and other forms of social activism. Twitter and Facebook also serve as online platforms to provide BLM activists with a shared set of principles and goals, as well as places to hold discussions on shared beliefs.

  “Not your grandmother’s civil rights movement” is how many BLM activists like to describe themselves. Yet this departure from the past, while a source of strength for some, is also a huge liability. In his federal indictment against the Internet Research Agency and thirteen Russian defendants, Special Counsel Robert Mueller states clearly that in the latter half of 2016, these defendants “began to encourage U.S. minority groups not to vote in the 2016 U.S. presidential election or to vote for a third-party U.S. presidential candidate.”16

  Russia effected this manipulation of Black voters through memes. A meme is an idea, behavior, or style that spreads from person to person within a culture. Often condensed into short, pithy sayings, memes spread virally through social media networks. “Woke Blacks,” an Instagram account created by user “IRA,” posted the following meme: “[A] particular hype and hatred for Trump is misleading the people and forcing Blacks to vote Killary. We cannot resort to the lesser of two devils. Then we’d surely be better off without voting AT ALL.”17

  This sentiment, injected through social media into the Black activist community by Russia, fanned the flames of a simmering, preexisting discontent. “Neither party has stepped to the front and made Black Lives Matter a priority,” said BLM activist Hawk Newsome, who condensed this meme down to the slogan “I Ain’t Voting.”18 As the slogan spread on Facebook and Twitter, many other young Blacks followed suit.

  Two years after the 2016 presidential election, the NAACP got involved. In December 2018, the frontline civil rights organization called for a weeklong boycott of Facebook because of the tech giant’s complicity in successful attempts to target Black voters and to keep them from the polls. But the NAACP’s efforts were simply too little, and much too late. While boycotts may have been effective in the 1960s in Birmingham or Montgomery or Selma, a weeklong boycott of Facebook represents a strategy mismatched to the nature of this threat, which comes from the intersection of technology and race. Those with a deep understanding of technology, like the Russian Internet Research Agency, exploited that technology to disenfranchise Black voters. Those without such a deep understanding of technology, like the NAACP or Black Lives Matter, simply played into the hands of this new technology of voter suppression.

  Technology misguided, misused, and misunderstood is now a central obstacle to progress in racial relations.

  Once, engineers like my father dreamed that computers would unleash the very best of human nature, and many like those in charge of Google, Microsoft, Facebook, and Twitter may still hold fast to that belief. But digital technology has also become a platform for the very worst in humanity, as IBM’s history so clearly shows. Tech companies, after all, were never in the business of altruism; instead, they have always sought to shore up their bottom lines to please their investors. Google, Microsoft, and Yahoo cannot provide citizens with unbiased information about critical issues regarding race. Facebook and Twitter cannot host open public discourses that ultimately bring people together across the racial divide.

  While digital technology, a ubiquitous force in modern life, has a role to play in advancing race relations, that role cannot be left up to the very companies that create the technology we use. If the history of technology and race tells us anything, it is that technology needs to be utilized as deftly by the most marginalized and least privileged as it is by those in power. In today’s world, activists for racial and social justice need to focus as much attention on Google, Microsoft, Apple, Facebook, and Twitter, and the many other tech companies whose products and services they continually use, as they do on the police and the criminal justice system, whose unjust policies and practices they rightly oppose.

  But my father and his IBM brethren had little way of knowing all of this in 1964. What they knew themselves, and what the Information Machine promised the general public, is that computers had become fast enough and smart enough to be trained on a new class of problems: problems that no longer involved just relationships between numbers, but relationships between people.

  From voting to employment to housing to criminal justice, in the mid-1960s, at that very point in history when progress was being made in regulating and legislating how human beings made decisions regarding race, men like my father were busy laying the foundations for migrating these decision-making processes away from people and into machines. Algorithms, as the Information Machine promised visitors, would soon be the ultimate impartial arbiters of human affairs, ready to solve even the most complex problems too massive to be left to biased human beings.

  Fifty years after the Information Machine, algorithms, sets of rules governing the analysis of raw data to make decisions, have become the arbiters of human affairs, yet algorithmic decisions are anything but impartial, and they are clearly not color-blind.

  The Voting Rights Act, signed into law by President Lyndon B. Johnson in 1965, was intended to overcome state and local legal barriers that prevented Black Americans from exercising their right to vote. Prior to the act’s passage, it was not uncommon in the South for election officials to turn away Blacks attempting to vote by saying they’d gotten the date, time, or polling place wrong. Blacks, even those possessing college degrees, were required to “recite the entire Constitution or explain the most complex provisions of state laws,”19 a task few White voters could accomplish.

  Fast-forward fifty years, and the threat to voting stems not from silly requirements but from voter ID laws and sophisticated redistricting algorithms capable of analyzing and carving up legislative districts to suppress Black voting power. After the US Supreme Court gutted key provisions of the Voting Rights Act in 2013, southern states got busy on new ways of suppressing or diluting the Black vote. The Republican-controlled North Carolina legislature, for example, ordered the state board of elections to run a mapping algorithm to determine the racial composition of the 2012 vote. Based on the data from that algorithm, the North Carolina legislature passed a voter ID law targeting Black voters.20
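  The dilutive power of district lines alone can be seen in a toy example. The data below is invented, and this is an illustration of the tactic known as “cracking,” not the North Carolina legislature’s actual mapping software; the point is that the same voters, carved up differently, yield different winners.

```python
# Toy illustration of vote dilution by district lines ("cracking").
# Invented data: fifteen voters, six of whom (the 1s) prefer party A,
# split into three districts of five.
voters = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

def seats_for_A(districts):
    """Count districts where party A's voters form the majority."""
    return sum(1 for d in districts if sum(d) > len(d) / 2)

# Map 1: A's voters concentrated -> they carry one district.
packed = [voters[0:5], voters[5:10], voters[10:15]]
# Map 2: the same voters spread thin across districts -> they carry none.
cracked = [voters[0::3], voters[1::3], voters[2::3]]

print(seats_for_A(packed), seats_for_A(cracked))  # 1 0
```

A real redistricting algorithm optimizes over thousands of such maps against demographic data, but the mechanism is the same: representation changes while not a single vote does.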

  Who’s making hiring decisions? “For more and more companies, the hiring boss is an algorithm,” a 2012 article in the Wall Street Journal notes.21 Algorithms that screen applicants for call-center and fast-food jobs find that an applicant’s distance from the workplace and referrals from current employees are two of the most significant predictors of employee retention and turnover. The problem with these metrics is that they’re closely tied to race.22
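  How a proxy feature like distance carries bias forward can be sketched with a toy screening model. All the data and feature names below are invented, and real screening systems are statistical rather than lookup tables, but the mechanism is the same: a model that imitates past human decisions inherits the bias encoded in them.

```python
# A toy screening "model" that imitates past human decisions.
past_decisions = [
    # (qualified, lives_far_from_worksite, was_hired)
    (True,  False, True),
    (True,  False, True),
    (True,  True,  False),  # qualified, but rejected: the biased pattern
    (True,  True,  False),
    (False, False, False),
    (False, True,  False),
]

def predict(qualified, far):
    """Screen by majority vote among identical past cases."""
    votes = [hired for q, f, hired in past_decisions
             if (q, f) == (qualified, far)]
    return sum(votes) > len(votes) / 2

# Two equally qualified applicants, differing only in the proxy feature:
print(predict(qualified=True, far=False))  # True  -- advanced
print(predict(qualified=True, far=True))   # False -- screened out
```

When the proxy feature correlates with race, as commuting distance does in segregated metropolitan areas, the model discriminates without ever seeing race as an input.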

  Credit scores are also prime examples of biased algorithms at work. They encode the past, and if that past includes practices such as high-cost, predatory loans that disproportionately targeted communities of color, Black and Latino consumers are more likely than their White counterparts to have damaged credit that shows up in credit scores.23

  An in-depth study of the predictive policing software PredPol revealed that algorithms send police into Black communities more often than White communities.24 If an officer sent to a neighborhood makes an arrest, the algorithm rates this neighborhood as a more likely candidate for future criminal activity and recommends more frequent deployment of police, regardless of the underlying crime rate. More police in a neighborhood leads to more arrests, regardless of the nature of the offenses, which leads to more police in the neighborhood. This sets up what researchers called a “runaway feedback loop,” re-creating the very kind of racial bias such predictive policing algorithms were meant to overcome.
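  The runaway loop the researchers describe can be reduced to a few lines. This is a deliberately crude sketch, not PredPol’s actual algorithm: two neighborhoods with identical underlying crime rates, and a policy of sending patrols wherever past arrests are highest.

```python
# A crude sketch of the "runaway feedback loop" -- not PredPol's code.
true_rate = [0.3, 0.3]   # IDENTICAL real rate of offenses in both places
arrests = [10, 11]       # recorded arrests: a small historical gap
patrols = 20             # officers deployed per day

for day in range(100):
    # Policy: send all patrols where past arrests are highest.
    target = 0 if arrests[0] > arrests[1] else 1
    # Arrests can only be recorded where police are present, so the
    # patrolled neighborhood keeps accumulating arrests at the same rate.
    arrests[target] += round(patrols * true_rate[target])

print(arrests)  # the one-arrest gap has exploded into [10, 611]
```

The model sees its own deployment decisions reflected back as “data,” which is exactly why the bias compounds rather than corrects.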

  At the other end of the criminal justice system, ProPublica showed that widely used sentencing software contained an algorithm that produced sharp racial disparities in forecasted recidivism rates.25

  When Microsoft released Tay, an artificial intelligence social media chatbot built to interact and learn from Twitter users, within a span of twenty-four hours the bot went from benign to anti-Semitic, racist, and misogynistic. When asked, “Did the Holocaust happen?” Tay replied, “It was made up.” Then Tay tweeted statements like “Hitler was right I hate the Jews.” When asked about Black Lives Matter activist DeRay Mckesson, Tay suggested, “like @deray should be hung!” And when questioned about women, Tay offered this opinion: “I fucking hate feminists and they should all die and burn in hell.”26

  What’s going on?

  While their creators seek to craft them as some sort of digital gods, algorithms actually learn from the profane experience of humans. When asked about the perfect search engine, Google cofounder Sergey Brin said, “It would be like the mind of God.”27 Yet even Brin, though he might try, cannot endow Google’s algorithms with God’s mind. To learn, to act artificially intelligent, algorithms must be trained. Most often, they are trained on copious amounts of information from the internet or other sources. That information contains what humans have accomplished, decided, acted upon, and thought in the past—not just in moments of enlightenment but at times of debasement and depravity as well. Ultimately, algorithms, though they are used for future decisions, are founded on decisions rendered by human beings in the past. Here, GIGO, that programming adage first used in my father’s time, surely applies: garbage in, garbage out. Or more precisely, racism in, racism out.
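  The “racism in, racism out” dynamic can be shown with a toy text model. The corpus below is invented, and Tay trained on millions of tweets rather than three sentences, but the failure mode is the same: the model has no beliefs of its own, only the associations present in its training data.

```python
from collections import defaultdict

# GIGO in miniature: a trigram text model "learns" only the word
# associations its training data contains, including hostile ones.
corpus = ("the visitors are welcome here . "
          "the visitors are welcome here . "
          "the outsiders are dangerous here . ").split()

model = defaultdict(list)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    model[(a, b)].append(c)  # record which word followed each pair

def complete(w1, w2, steps=3):
    """Extend a two-word prompt using the first continuation seen."""
    out = [w1, w2]
    for _ in range(steps):
        options = model.get((out[-2], out[-1]))
        if not options:
            break
        out.append(options[0])
    return " ".join(out)

print(complete("the", "visitors"))   # "the visitors are welcome here"
print(complete("the", "outsiders"))  # "the outsiders are dangerous here"
```

Feed the model a corpus containing a slur against a group, and it will complete prompts about that group with the slur; no malice is required, only data.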

  Using city planning as an example, Think—the main film shown in IBM’s pavilion at the 1964 World’s Fair—informed viewers that all the complex social and economic problems of modern life were interrelated, that making a change in one aspect of a complex problem often produced unexpected results in another. Think promised that computers could be used to build models that would help determine the relative merits of alternative strategies to solve these problems. The film is one of the earliest records of preparing the public for a day when software algorithms would take over from human decision makers, and the bounty humans would then reap. That day is upon us, but the bounty, certainly as it regards the complex problem of racial relations, is far from being reaped.

  Given this realization, I often ask myself what my father would do.

  With impunity, algorithm makers can lower their sights from creating the mind of God to building bias-free code. Hiring can favor Black engineers and those of other marginalized groups. Training can bring to light the historical and cultural issues that give rise to biased code. Data used to train algorithms can be scoured for embedded bias. Quality assurance can be expanded to include running tests on users of all demographics and correcting biased results prior to product release.
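  The expanded quality assurance described above might look, in miniature, like a demographic gap check run before release. The model, test users, and numbers here are all hypothetical stand-ins; the shape of the check is what matters.

```python
# A miniature pre-release bias check: compare a model's approval rates
# across demographic groups. Model and data are hypothetical stand-ins.
def score(applicant):
    # Stand-in for any trained model; approves on income alone.
    return applicant["income"] >= 40_000

test_users = [
    {"group": "A", "income": 50_000}, {"group": "A", "income": 30_000},
    {"group": "B", "income": 50_000}, {"group": "B", "income": 45_000},
]

def approval_rate(group):
    members = [u for u in test_users if u["group"] == group]
    return sum(score(u) for u in members) / len(members)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval-rate gap between groups: {gap:.2f}")  # 0.50
# A release gate would block shipping when the gap exceeds a set threshold.
```

Even so simple a test, run on users of all demographics before release, would catch disparities that a single aggregate accuracy number hides.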

  Activists also have a key role to play. They can give themselves impeccable training in all aspects of digital literacy. They can insist on digital literacy curricula in the K–12 classrooms of all communities, especially marginalized communities and communities of color. They can use social media to organize actions but not to conduct meaningful discourse on values and beliefs. They can employ alternatives to standard social media for communication during protests and civil actions. They can promote software and algorithms already vetted as bias-free. They can align themselves with groups working to evaluate and hold algorithm makers accountable for biased software. And they can develop in-house cyber teams capable of responding defensively and proactively to the full array of cyber threats.

  But even more important than changing technology is recalling why my father ran a covert operation from his dresser drawer in the first place. It’s not simply about altering a search result here or there, or about modifying an algorithm to reduce bias, or even about developing better digital literacy. Only when people of color and other minorities ascend to the highest levels of decision-making and power in technology companies will the systemic changes required to help end racism and bias actually take place. People of color and other minorities are underrepresented not only in the technology workforce but also in the C-suites and boardrooms of technology firms.

  Even the above recommendations are not enough to bring about a much-needed change in racial relations. “Certainly, if the problem is to be solved,” Martin Luther King said, “then in the final sense hearts must be changed.”28

  My father and the IBM Information Machine had it right: digital technology can be brought to bear on human problems, including racial relations. Algorithms can be reimagined, re-created, and revised. Software bias can be exposed, eliminated, and excluded. They also had it wrong: all of these technological changes can be instituted, yet racial insensitivity, intolerance, and injustice will remain. Technology, when used right, is a means leading to progress in racial relations, not an end; it’s one bridge on a journey toward justice, not a pin marking a final destination.

  While Think played to millions on fairgrounds built upon dumping grounds, Freedom Summer took place in the South. While viewers relaxed in the air-conditioned comfort of IBM’s “egg,” James Chaney, Andrew Goodman, and Michael Schwerner were murdered in the hot backwoods of Mississippi. And while visitors waited in line to enter the IBM pavilion in Queens, marchers locked arms on Bloody Sunday to cross the Edmund Pettus Bridge in Selma. King’s letter from Birmingham’s city jail poignantly captures this tortured juxtaposition:

  I am cognizant of the interrelatedness of all communities and all states. . . . Injustice anywhere is a threat to justice everywhere. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly.29

 
