Brotopia

by Emily Chang


  To be clear, men do get harassed online, but women experience the more extreme forms, such as rape threats, death threats, and stalking. Studies show that men are far more likely to be called out or belittled because of their sports affiliations, while women are far more likely to be attacked simply because of their gender. Girls are also disproportionately represented among the victims of cyberbullying. Young women, particularly those aged eighteen to twenty-four, are three times as likely to be sexually harassed online. As one feminist researcher put it, “Rape threats have become a sort of lingua franca—the ‘go-to’ response for men who disagree with something a woman says.” Perhaps that “go-to” aspect is why law enforcement doesn’t take such threats very seriously. But women do: 38 percent of women who have been harassed online describe their experience as “extremely upsetting,” as opposed to 17 percent of men.

  For many women, especially those in the public eye, the hate being thrown around means the internet at large has become a place where they feel unwanted. Marissa Mayer told me she took a monthlong break from Twitter while she was running Yahoo because “it was just so negative.” In the summer of 2016, the Saturday Night Live star Leslie Jones tweeted, “I feel like I’m in a personal hell,” after she was swamped with racist and sexist attacks sparked by her appearance in the all-female Ghostbusters remake. She also took a break from Twitter, but before leaving, she wrote, “Twitter I understand you got free speech I get it. But there has to be some guidelines . . . You can see on the profiles that some of these people are crazy sick. It’s not enough to freeze Acct. They should be reported.”

  The message of these negative, upsetting episodes is this: Women, you’re not welcome here. And if you choose to show up anyway, be prepared to live with any harassment that comes your way.

  I know this from personal experience.

  As a journalist, I regularly use social-media sites such as Twitter and Facebook to promote my stories and interviews. They are invaluable platforms for distribution and constructive feedback. However, I often find myself on the receiving end of messages that are obnoxious, dirty, and sometimes downright frightening. One user, who stalked me on Twitter for several months, suggested taking me to a warehouse “for a whipping,” told me to eat his “high-quality sperm,” and tweeted a hard-core pornographic video at me with the words “Submission Time.” He also mentioned my husband by name and suggested the two of them have sex with me together. “Any boy that penises you gets my support,” the troll wrote. And when I was pregnant, the user tweeted, “Obeying me is a good thing, looks like you’re pregnant with my lil girl in belly.” The cherry on top: when the troll responded inappropriately to a tweet in which I had tagged IBM CEO Ginni Rometty, after an interview I had conducted with her, Rometty herself was alerted with several cheerful notifications from Twitter.

  I’ve developed the requisite thick skin, and I use a common tactic for dealing with trolls: ignoring them. I quickly scroll past the vitriolic direct replies to my Twitter account, and I never, ever use Reddit. Once an interview I conducted with Apple’s co-founder Steve Wozniak ended up on Reddit, and the response was worse than unnerving. (For the same reason, many women in tech avoid Hacker News, the official bulletin board of the prominent start-up incubator Y Combinator, which has become one of the industry’s leading message boards; the trolls are there too.) Most important, I don’t respond to the haters. This is accepted wisdom among many female users: the worst way to deal with a troll is to poke it. Though sometimes the words disturb me, I do my best not to let them make me feel like any less of a journalist, a person, or a woman.

  But the internet shouldn’t just be for people with thick skin. And being a woman online shouldn’t be accompanied by routine threats of sexual assault.

  I reported my personal troll to Twitter in March 2017, after the company claimed, yet again, to have improved its harassment controls. Just twelve hours after filing my report, however, I received this message from Twitter: “We reviewed your report carefully and found that there was no violation of Twitter’s Rules regarding abusive behavior.” Twitter’s “rules” state that “you may not incite or engage in the targeted abuse or harassment of others.” If telling me to eat his “high-quality sperm” and inviting me to a whipping don’t count as harassment, what does? Sure, I can mute the account or block it, but all of these tweets are still visible to the public, and this troll can easily set up a new account and start attacking me again. It appears that my troll hasn’t tweeted from this particular account for some time. When I asked Twitter for more information about why, the company told me it doesn’t comment on individual cases. All of the offensive tweets I have referenced still live online. It feels as if Twitter is telling me, “Just deal with it” or, worse, “You’re not worth fixing this.” Apparently, I—like so many other women—am not worth alienating one extremely offensive user over.

  In a telling example of just how crudely Twitter’s rules can be applied, actress Rose McGowan’s account was suddenly suspended in October 2017 while she was in the midst of tweeting allegations that Hollywood heavyweight Harvey Weinstein had raped her. In keeping with its policy not to comment on individual accounts, Twitter did not explain why, and then faced an epic backlash. Actress Anna Paquin called for women to boycott Twitter, and countless women rallied behind the cause.

  Twitter later broke its own rule and explained that McGowan’s account had been temporarily locked because she had tweeted a private phone number. (The number appeared in the signature of an email, an image of which McGowan had tweeted as proof that others at the Weinstein Company were aware of his behavior.) While Twitter often seems reluctant to act on behalf of users who have been abused, this is one prominent case in which it was remarkably quick to censor someone who was trying to out an abuser. These seemingly inexplicable decisions might be explained in part by a closer look at how offensive content is handled once it is reported. The social networks, including Twitter, outsource most content moderation to contractors around the world. While there is hope that technology, with the help of artificial intelligence, might be able to enforce rules more consistently in the future, for now the task is up to humans. The contractors faced with the difficult job of filtering and flagging disturbing content on these networks generally don’t last long and must constantly be retrained, yet they wield an inordinate amount of power when it comes to deciding what stays up and what comes down. Their decisions, informed or not, greatly affect people’s lives, whether those people are me, Brianna Wu, Leslie Jones, or Rose McGowan.

  McGowan’s account was reinstated and Twitter promised to be more transparent about how it makes such decisions in the future. “Today we saw voices silencing themselves and voices speaking out because we’re *still* not doing enough,” CEO Jack Dorsey tweeted.

  THE BUSINESS CASE FOR SCRAPPING THE HATE

  Over the years, many social-media executives might have assumed that combating trolls could be bad for the bottom line. Being seen as silencing free speech can become a rallying cry for boycotts and cyberattacks. After all, traffic from trolls is still traffic; who wants to drive users away? However, we may be at an inflection point. It seems increasingly likely that not combating harassment might be even worse for business.

  Today, both Reddit and Twitter are fighting to attract not only new users but advertisers, who have become wary of being associated with less-than-mainstream content. The most famous example: When big-name companies including Mercedes-Benz, Johnson & Johnson, Verizon, and JPMorgan discovered, early in 2017, that some of their YouTube ads were running next to neo-Nazi and jihadist videos, they all suspended or pulled advertising from Google. Most returned, but only after the company made changes, including doing a much better job of flagging offensive content by hiring more people and deploying “machine learning tools” (a form of artificial intelligence) to deal with the problem. Ad crisis averted, but the market spoke clearly: hate is bad for business. Google’s actions spoke clearly too: they showed that companies can indeed change when sufficiently motivated.

  As Twitter has sought to gain broader adoption, it too has tried to change—not always successfully. In 2015, Dick Costolo stepped down as CEO of Twitter. He was replaced by Twitter co-founder Jack Dorsey, under whose leadership additional steps have been taken to reduce harassment. The network rolled out a new filter that prevents users from seeing offensive or threatening content and says it works harder to identify accounts that were obviously spawned to harass others. It has added tools to mute and report hateful speech, tweaked search to hide abusive tweets, and says it is cracking down harder on repeat offenders. In 2017, Twitter said it was taking action against ten times more accounts than it had the year before (although, it seems, my troll is not one of them).

  Meanwhile, though Twitter and Reddit still have a fairly large user base, they have been left in the dust by the behemoth that is Facebook. While Facebook was inspired, in part, by the sexist “Hot or Not” ratings site, it has gone on to become a social-networking site that is, by comparison, friendly to a diverse range of users. In the process, Facebook has attracted over two billion users and, along with them, billions of advertising dollars.

  Don’t get me wrong: Facebook isn’t perfect. The social network has a long way to go to combat online hate, both on the main site and on Instagram, which it owns. But cyber hate is a far bigger, more visible problem on Reddit and Twitter than it is on Facebook and Instagram. Because my Facebook account is private and I have to accept friends before they can interact with me, I almost never see hurtful comments. Even when I do post publicly, the responses are rarely vile. Perhaps that has something to do with the “real names” requirement. But it also has to do with the way the site has been architected to balance product and business concerns.

  Facebook insiders say that Sheryl Sandberg, who joined the team in 2008, was critical in transforming the hugely successful start-up into an equally enormous business. Part of that involved developing policies to ensure that Facebook was a safe, hospitable place for both users and advertisers. Sandberg’s influence at Facebook goes some way to answering the question of whether major social-media sites might have benefited by having more diverse and inclusive leadership.

  “Sheryl is, and I think Mark would agree, probably the most important decision he ever made,” former Facebook mobile director Molly Graham tells me. When Sandberg showed up in 2008, the social network, which then had just 66 million users, was having a dark year amid a storm of privacy issues and a nearly nonexistent business model.

  “Facebook’s success was not an inevitability,” Graham says. Not only did Sandberg help compel immediate changes to company culture; she also took strong stands on the side of user privacy and protections. This was about the same time that Mark Zuckerberg was becoming obsessed with Twitter’s growing user base and was considering a series of changes that would have taken Facebook down a very different path.

  In the months after Sandberg came aboard, the fledgling Twitter was dominating live conversation on the web and getting international traction. “He was trying to decide why Twitter was so successful,” a former Facebook employee tells me. “He got obsessed with openness and how much data they had and why are they owning real-time news? He fixated on this idea that people are actually willing to share more openly than we think they are.” Zuckerberg proposed several product tweaks designed to push Facebook users to be even more open in the hopes of driving engagement.

  After Facebook introduced location tagging, for example, users could tag others at the same location, but those tagged users couldn’t untag themselves. I could have said that Mark Zuckerberg was in Las Vegas, but he wouldn’t have been able to say, “Uh, no, I’m actually in Palo Alto.” Zuckerberg wanted to extend the same rules to photos, so that if someone tagged another user in a photo, that second user couldn’t then untag himself or herself. Several other Facebook executives, including Sandberg and product head Chris Cox, were against these changes, feeling they were unfriendly not only to users in general but to women especially. “The obvious example is you’re a woman, someone tags you in something offensive or not related to you, and you can’t untag yourself,” the former Facebook employee tells me. “It was a huge fight inside the company. Massive, teardown walls.” Ultimately, Zuckerberg’s photo-untagging proposal never came to fruition, and the restriction on location untagging was lifted. “Before, when bad things happened, nobody had anybody to go to when weird decisions were made,” the former Facebook employee says. “Sheryl made every voice that was diverse stronger because now they had a place to go.”

  To his credit, Zuckerberg—though he might have had a few harebrained ideas—was also willing to listen. “More than anyone I’ve ever met, he has this infinite capacity to learn and change. He has as many flaws as anybody, but I’ve never met anyone who is so open to change,” former Facebook CTO Bret Taylor tells me. The key, employees say, is that Zuckerberg made as much space for Sandberg as she made for him.

  Both Zuckerberg and Sandberg cared very much about making sure Facebook was a free but safe community, but they approached difficult decisions on content differently. Zuckerberg was more likely to consider how issues might affect the broader platform, whereas Sandberg encouraged employees to think about the effect on individuals. “She came at it from more of an individual person’s perspective, the empathetic, how might this person feel? These are human beings, in their bedrooms, in their dorm rooms, reacting to something that causes them emotional trauma,” a former Facebook executive told me.

  Sandberg oversaw the operations team as they refined a detailed set of policies that would guide Facebook’s stance on certain kinds of content on a massively complex scale—what gets left up, and what gets taken down—everything from Holocaust deniers to the Arab Spring to offensive satire. “There were a couple of content issues like rape jokes and violence toward women—any sort of content directed at women—it’s obviously a topic she cares a lot about,” the former Facebook executive said. “And she had a really big impact on helping the company and helping us get to better decisions on that stuff. Sheryl’s job was to push on us when she felt we didn’t get it right.”

  Facebook still faces an uphill battle against offensive and disturbing content, a battle that is only getting bigger as the site becomes more influential. In 2017, it added three thousand people to the forty-five hundred already employed to moderate content worldwide. This, as Facebook was roiled by the launch of its video service, Facebook Live, which soon became home to broadcasts of real-time rapes, beatings, suicides, and incidents of police brutality. But the biggest reckoning came later that year, when Facebook revealed that Russian operatives had bought thousands of ads on the social network in an attempt to sow political turmoil around the 2016 US election. Twitter and Google were quickly roped into the scandal. All three companies were called on to testify before Congress about their policies concerning not just political advertising but also fake accounts and fake news. Facebook announced that it was hiring an additional ten thousand people to handle safety and security. In an interview with Axios, Sandberg apologized to the American people but also reasserted Facebook’s commitment to free speech and its view of itself as a tech company, not a media company.

  Facebook, Twitter, and Google, via YouTube, profit off the content that the public provides. This content includes everything from fake news to postings that might be hateful or abusive. But 2017 might well be seen as the turning point, the moment when these internet juggernauts began to take greater responsibility for the substance of the messages, ads, and news they facilitate. No doubt there are heated internal debates happening within Facebook at this very moment, about how to continue to build an online community where people feel both safe and free. The question remains how the company rises to that responsibility.

  HOW HARASSERS FIGHT FOR SURVIVAL

  Culture change can’t be guaranteed by simply putting a woman at the top of the org chart. Changing the moral tone and community standards at a social-media company can be particularly tricky because users feel ownership over the site—and rightly so, as they are producing the content. Case in point is the story of Ellen Pao’s second epic setback in Silicon Valley. In 2014, just months before she would lose her famous sex discrimination case against Kleiner, Pao was appointed interim CEO of Reddit, where she swiftly tried to crack down on harassment, only to resign, under pressure from users, after just eight months on the job.

  Reddit, the so-called front page of the internet, was founded in 2005 by Steve Huffman and Alexis Ohanian, two young male entrepreneurs. They tell me they started the site as a place to have “authentic conversation.” If the optimism of that statement reminds you of Twitter’s free-speech philosophy, here’s something else the two sites share: user pseudonymity. Reddit quickly became a popular destination to discuss everything from puppies to politics, attracting more than 330 million monthly users to date, who are known as Redditors. But like Twitter, Reddit also became a haven for users spewing misogyny, racism, homophobia, and xenophobia, which made it difficult for the network to develop into a legitimate business. Ellen Pao agreed to take on the challenge of leading Reddit in the hope of making it—and the internet—“a better place for everyone.” At the same time, co-founder Alexis Ohanian, who had left the company in 2010, returned to help, with the title of executive chairman.

  Pao (who got plenty of up close and personal online attacks from trolls during her suit against Kleiner) made it her top priority to clean up the site, first committing to remove revenge porn—explicit photos, usually of ex-girlfriends, that are posted without the subjects’ consent. She also shut down several of Reddit’s nastiest sub-forums, including antitrans and antiblack communities as well as one called “Fat People Hate,” in which users mocked the overweight.

 
