The Cyber Effect

by Mary Aiken


  Setting the minimum age for Facebook and Instagram at thirteen is a legal data-protection requirement in the United States, but it doesn’t appear to be strictly enforced. Why? In terms of scale, Facebook has 1.65 billion active members (as of May 2016) who make one post a day on average, including the uploading of 300 million images. Could these companies monitor and police illegal use of the site? When asked, Simon Milner, a senior executive with Facebook, said that it would be “almost impossible.”

  Almost impossible. Interesting language. If you can’t police your own rules, then you might consider rethinking, revising, or removing them. Or simply closing down. That goes for the other social-networking sites as well. Patricia Cartes, head of Global Safety Outreach, Public Policy at Twitter, admitted that the company does not know which users are under thirteen. “We haven’t found a silver bullet in the online industry,” she said, “to meaningfully verify age.”

  Silver bullet? Like the one used to kill a werewolf?

  Facebook and other social networks have always claimed that it is difficult—or “almost impossible”—to identify a child, and therefore they can’t actively implement and police their own rules. But let’s think about this for a moment. When a kid opens up a Facebook account, the first thing he or she typically does is put up a profile photograph, and then “friend” a bunch of schoolmates who are usually the same age. They go on to post comments about school, classmates, and extracurricular activities. If you can’t figure out that these kids are nine or ten, you aren’t very smart. They are constantly providing photographic evidence of their age. Another piece of evidence that makes me suspect that these social-networking sites are not particularly interested in monitoring this problem: In 2016, Facebook awarded $10,000 to a ten-year-old boy from Finland, a coding ace who discovered a security flaw in Instagram. Won’t this only encourage more underage use?

  I would argue that, when it comes to minors, there is an urgent need to develop more effective ways of verifying the age of a new user on social networks. The real-world example would be a liquor store or a pub that’s not allowed to sell alcohol to underage individuals. Would it be okay if the salesclerk or barman didn’t believe it was necessary to ask for proof of legal age for drinking—or wanted the profits more than he wanted to obey the law?

  The psychologists and educators behind the large U.S. study in 2014 concluded that the results were troubling, particularly in regard to the developmental repercussions of children’s online habits. “Engaging in these online social interactions prior to necessary cognitive and emotional development that occurs throughout middle childhood could lead to negative encounters or poor decision-making. As a result, teachers and parents need to be aware of what children are doing online and to teach media literacy and safe online habits at younger ages than perhaps previously thought.”

  The Bystander Effect

  Obviously quite a number of parents are simply looking the other way. Perhaps they are quietly relieved, even proud, to see that their children are making “friends,” usually a sign of social thriving and happiness. I think they need reminding about how ramped up the cruelty can be online. If you think girls of middle school age have always been mean, you’ve not seen what they can do in the escalated environment of the Internet.

  Let’s remember the story of Sarah Lynn Butler, the vivacious and beautiful twelve-year-old girl who was voted queen of the upcoming fall festival at her school in Williford, Arkansas, in 2009. The seventh grader was “very happy” with the news, her mother told the media, and “always laughing and giggling and cutting up and playing around.” According to her mother, Butler had “lots of friends.”

  The problem was, after Butler was crowned, she began receiving mean messages on her MySpace page. Rumors circulated on the social network that she was really a “slut,” along with other nasty descriptions. When her mother saw Butler’s MySpace page and asked her daughter to talk about it, the mother was promptly removed from the friends list, and therefore denied access to her daughter’s page.

  Not long afterward, when the family left to run errands one afternoon, Butler asked to stay home. A browsing history revealed that she had logged on to her MySpace page and apparently seen the last message posted there, which said she was “just a stupid little naive girl and nobody would miss her.” When her family returned later that day, they found her dead. The twelve-year-old had hanged herself. Her suicide note said that she couldn’t handle what others were saying about her.

  The stories of self-harm, even suicide, are growing in number—and, of course, the subject of cyberbullying has become an international conversation. In a poll conducted in twenty-four countries, 12 percent of parents reported their child had experienced cyberbullying—defined as repeated critical remarks and teasing, often by a group. A U.S. survey by Consumer Reports found that, over the previous year, 1 million children had been “harassed, threatened, or subjected to other forms of cyberbullying” on Facebook.

  What is the explanation for it?

  In general, the younger you are, the more friends you have on a social network. Let’s look at how the numbers work on Facebook, in a 2014 study of American users. For those over sixty-five years old, the average number of friends is 102. For those between forty-five and fifty-four years old, the average is 220. For those twenty-five to thirty-five years old, the average is 360. For those eighteen to twenty-four, the average is 649. What does that mean for the under-thirteens, the social media Invisibles? The answer is, Who knows? There are no reliable numbers.

  Let’s for a second discuss the sheer social madness of that. As the work of Robin Dunbar, a psychologist and anthropologist at the University of Oxford, has argued, primates have large brains because they live in socially complex societies. In fact, the group size of an animal can be predicted by the size of its neocortex, especially the frontal lobe. Human beings, too, have large brains because we tend to live in large groups.

  How large? Given the size of the average human brain, the number of social contacts or “casual friends” with whom an average individual can maintain stable social relationships is around 150. (It is called Dunbar’s number.) This number has been consistent throughout human history—and it matches the size of modern hunter-gatherer societies, most military companies, most industrial divisions, most Christmas card lists (in Britain, anyway), and most wedding parties.

  Anything much beyond Dunbar’s number is too complicated to handle at optimal processing levels.

  Now imagine the child who has a Facebook page and an Instagram account, who participates on Snapchat, WhatsApp, and Twitter. Throw into that mix all the mobile phone, email, and text contacts. A child who is active online, and interested in social media, could potentially have thousands of contacts.

  We are not talking about an intimate group of friends.

  We are talking about an army.

  And who’s in this army? These aren’t friends in any real-world sense. They don’t really know and care about you. They are online contacts—their identity and age and name potentially false. According to Dunbar, if children have grown up spending most of their social time online with thousands of these “friends,” they may not get enough real-world experience in handling social groups of any size, but particularly on a large scale—rendering them even less able to cope with real-world crowds. In other words, spending more time on social media can render children less competent socially, not more.

  In the real world, if five friends turn on you, it is bad enough. Now imagine your class of twenty turns on you, and then imagine the entire school of five hundred kids turns on you. It would be unbearable—and you would stay home “sick” and hide under the covers. But now, imagine one thousand of your social-network “friends” chanting and pointing at you. Not many eleven-year-olds have the social skills to deal with that. Me neither.

  Even if comments do not qualify as cyberbullying, a child of this age can be hypersensitive to criticism and, like a teenager, tends to focus on the cutting remark rather than the compliment.

  I have been involved in two cyberbullying prevention campaigns for the EU’s Safer Internet Day. In each case I’ve tried to employ some creative thinking and social science theories to come up with solutions. Because, it seems to me, all the money, time, and prevention campaigns for cyberbullying haven’t really curtailed the incidence of abuse—either because they aren’t working or because the number of kids going online without proper guidance is growing so fast that no program can possibly keep up.

  For the first campaign, I used the bystander effect in psychology to raise awareness of cyberbullying among schoolchildren. The bystander effect takes its name from a crime in New York City in the 1960s, when a young woman, Kitty Genovese, was stabbed to death after crying out on the street for help—but nobody came. After studying this disturbing case, psychologists learned that the greater the number of people who witness a crime or emergency, the less likely any one of them is to feel responsible for responding. There’s another term in psychology for this phenomenon: diffusion of responsibility. The tenet is similar. When part of a large group, everybody thinks that someone else will act.

  Think how this may work online. In the case of cyberbullying, hundreds of “friends” can witness bullying or harsh criticism online but don’t step up and do anything. It’s actually possible that the more friends you have, the less likely it is that anybody will intervene. For this campaign, the motto that I created was: “Don’t Be a Bystander—Stand Up and Do Something.” In other words, don’t wait for one or two of the other three hundred people to do something. It’s up to users to create a better environment.

  The second campaign was called “Be a Cyber Pal,” a bullying prevention initiative for Safer Internet Day in 2014. Again, I employed a sound theoretical approach from psychology, the theory of planned behavior (TPB), which holds that perceived social norms help shape behavior: the more you mention something, the more you normalize it. So my initiative was to run an anti-cyberbullying campaign without mentioning the actual word cyberbullying. I can’t help but think that the more we mention it, the more we normalize it, and the more we may increase the probability and expectation of kids being cyberbullied.

  “Be a Cyber Pal,” conceived as an antidote to cyberbullying, was about actively being a kind, considerate, supportive, and loyal friend. And it is cause for hope that it became the most downloaded poster of the campaign that year. I think the positive message—rather than repeating another scary cyberbullying story—gave teachers and families something that’s easier to talk about.

  More recently, hoping to apply solid science to other solutions, I have been working on a mathematical formula to predict the prevalence of antisocial behavior online—in hopes of designing an algorithm to identify incidences of bullying. How?

  Locard’s exchange principle is the basic premise of forensic science. As I mentioned in the prologue of this book, it dictates that every contact leaves a trace, and nowhere is this more true than online. Unlike the playground, where the mean words of a bully disappear instantly into the ether—unless there is an eyewitness—online it is just the opposite. Cyberbullying is nothing but evidence: a permanent digital record. So how did we get to the point where it became more problematic than real-world bullying? My answer is taken from The Usual Suspects, one of my favorite movies, in which Kevin Spacey delivers the immortal line “The greatest trick the devil ever pulled was convincing the world he didn’t exist.”

  To me, the greatest trick social media and telecom companies ever pulled was convincing us that they can do nothing about cyberbullying.

  In terms of digital forensics, it is a cybercrime with big fingerprints. Using an approach that I am calling the math of cyberbullying, we can identify both victims and perpetrators.

  Many of the big-data “social analytics” outfits like Brandwatch, SocialBro, or Nielsen Social use algorithms to identify or estimate much more complicated things, like a Twitter user’s age, sex, political leanings, and education level. How hard would it be to create an algorithm to identify antisocial behavior, bullying, or harassment online? My equation goes like this: d × c × (i × f) = cyberbullying.

  The math would be this simple:

  I am bullying you = direction (d)

  bitch, hate, die = content (c)

  interval (i) and frequency (f) = escalation

  I am actively working with a tech company in Palo Alto to apply the Aiken algorithm to online communication. To develop the c (content) database, I plan on launching a nationwide call for content. Every person who has ever received a hateful bullying message can forward it to our repository. In that way, victims of cyberbullying can become an empowering part of the solution to an ugly but eminently solvable big-data problem. We just need the collective will to address it.
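The book gives the formula only at this high level; it specifies no word lists, weightings, or thresholds. As a minimal sketch of the shape of d × c × (i × f), assuming each message carries a sender, a recipient, text, and a timestamp, one might score a thread like this. The ABUSIVE_TERMS set and every scaling choice below are invented placeholders, not the proposed database or algorithm:

```python
from dataclasses import dataclass

# Stand-in for the crowd-sourced content database proposed in the text;
# every term here is an invented placeholder.
ABUSIVE_TERMS = {"bitch", "hate", "die"}


@dataclass
class Message:
    sender: str
    recipient: str
    text: str
    timestamp: float  # seconds since some epoch


def cyberbullying_score(messages, sender, recipient):
    """Toy score with the shape of d x c x (i x f).

    d (direction): share of the sender's messages aimed at this recipient.
    c (content):   share of those messages containing abusive terms.
    f (frequency): abusive messages per day.
    i (interval):  shorter gaps between abusive messages push i toward 1.
    `messages` must be sorted by timestamp.
    """
    sent = [m for m in messages if m.sender == sender]
    if not sent:
        return 0.0
    directed = [m for m in sent if m.recipient == recipient]
    if not directed:
        return 0.0
    d = len(directed) / len(sent)

    abusive = [m for m in directed
               if ABUSIVE_TERMS & set(m.text.lower().split())]
    if not abusive:
        return 0.0
    c = len(abusive) / len(directed)

    # Floor the span at one hour so a single-day burst still scores.
    span_days = max((abusive[-1].timestamp - abusive[0].timestamp) / 86400,
                    1 / 24)
    f = len(abusive) / span_days  # abusive messages per day

    gaps = [b.timestamp - a.timestamp for a, b in zip(abusive, abusive[1:])]
    i = 1.0 if not gaps else 1.0 / (1.0 + min(gaps) / 3600)

    return d * c * (i * f)
```

Under these invented weightings, a stream of repeated slurs aimed at one recipient scores far above a single stray insult, which captures the intuition that direction, content, interval, and frequency together signal escalation.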

  The algorithm can be set to automatically detect escalation in a cyberbullying sequence, and a digital outreach can be sent to the victim: “You need to ask for help. You are being bullied.” And simultaneously an alert can be sent to parents or guardians telling them something is wrong and encouraging them to talk to their child.
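The escalation trigger is likewise left unspecified. A toy sketch, assuming we already have the sorted timestamps of messages flagged as abusive: the 24-hour window and the doubling rule are invented for illustration, and the alert strings simply echo the outreach described above.

```python
def escalating(abusive_timestamps, window_hours=24):
    """Flag escalation when the most recent window holds at least twice
    as many abusive messages as the window before it.

    `abusive_timestamps` are seconds, sorted ascending. The window size
    and the doubling rule are illustrative choices only.
    """
    if len(abusive_timestamps) < 2:
        return False
    window = window_hours * 3600
    latest = abusive_timestamps[-1]
    recent = [t for t in abusive_timestamps if latest - t < window]
    earlier = [t for t in abusive_timestamps
               if window <= latest - t < 2 * window]
    return len(recent) >= 2 * max(len(earlier), 1)


def outreach(is_escalating):
    """Return (message to victim, alert to parents), or None."""
    if is_escalating:
        return ("You need to ask for help. You are being bullied.",
                "Something may be wrong. Please talk to your child.")
    return None
```

A burst of abuse concentrated in the last day trips the trigger; a slow, steady trickle spread over several days does not, so the alert fires on acceleration rather than mere volume.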

  The beauty of the design is twofold—first, only artificial intelligence would be screening the transactions, which would be incredibly efficient for a big-data problem such as cyberbullying, and second, there would be no breach of privacy for the child. Parents wouldn’t need to see the content, only be alerted when there appeared to be a problem. I know there could be an outcry about surveillance, but we are talking about minors and we are talking about an opt-in solution with parental consent.

  Ultimately the algorithm could reflect jurisdictional law in the area of cyber-harassment against a minor and be designed to quantify and provide evidence of a crime. One day, it could involve sending digital deterrents to the cyberbully, which is a way to counter what cyberpsychologists call “minimization of status and authority online.” We can show young people that there are consequences to their behavior in cyberspace.

  It’s a twenty-first-century solution to a twenty-first-century problem.

  Nastiness online is becoming an accepted reality—and something most people have witnessed. The majority of adult social media users said they “have seen people being mean and cruel to others on social-network sites,” according to a report from the Pew Research Center’s Internet and American Life Project. The conditions of the cyber environment can make cruelty a competitive sport—and posts escalate from barbs to sadism very quickly. Envy drives some of this activity. Celebrities are often targets. It took Monica Lewinsky, one of the early social media victims, a decade to emerge from her experience of being shamed and humiliated. Zelda Williams, the twenty-five-year-old daughter of actor Robin Williams, gave up her Twitter account after she was exposed to unimaginably awful tweets following her father’s death.

  Not long afterward, when American baseball legend Curt Schilling tweeted his paternal pride that his daughter Gabby had received a college acceptance letter, the celebratory mood devolved into ugliness when Twitter “trolls” engaged in sexually explicit posting about Gabby, who was seventeen years old. Schilling did what probably millions of other fathers can only dream about: He used his fame and popular blog to track down nine of the individuals who had generated the hateful and sexist comments and got them fired from their jobs or sports teams.

  If young adults can be so devastated by online attacks—then what about children?

  “Trolls” are malicious individuals who search online for unsuspecting people to deceive and trick. Sadistically teasing and taunting children and tweens is a sick sport for them. One common place where they meet up with kids as young as six years old is on Internet gaming sites, where groups use webcams and microphones to communicate while they meet one another online and play. They can be found on popular multiplayer online games like Grand Theft Auto (affectionately known as GTA), which they play in hopes of winning the trust of young unsuspecting players in order to trick them, usually making them freak out—while recording their conversations and posting them for kicks. This is damaging for children on so many levels, not to mention that it brings them into contact with these pathological strangers who are manipulating and preying on their innocence—for laughs.

  The Elephant in the Cyber Room

  But let’s dig a little further into the comprehensive EU study. Children were asked if they had been bothered or upset by anything they’d seen online—or knew about things that bothered their friends.

  Bothered was used to describe something that “made you feel uncomfortable, upset, or feel that you shouldn’t have seen it.” The children were asked to describe in their own words what bothered them.

  Nearly ten thousand responses came in. They were diverse and wide-ranging, and changed considerably with the age of the child. Younger children were more concerned about content, such as something they’d seen that was meant for adults. Older children were more worried about conduct and contact, in other words, troubled by something they’d done or witnessed being done online. They worried about how they were supposed to act (conduct) and about people they might meet (contact) online.

  The researchers compiled the responses and organized them into types of concerns. Girls tended to be more concerned about strangers they had met online, or might meet. Boys were more bothered by violence they’d seen. Both boys and girls described being bothered by things they’d seen on video-sharing websites like YouTube—violent or sexual images, as well as other inappropriate content. Both boys and girls described being bothered by real violence, as well as gory, cruel, and aggressive fictional violence that they’d seen in other places online—particularly violence against animals or other children.

  Okay, let’s stop there for a second. Children watching violence against children? Yes, you read that correctly. Almost anything can go up online, on any forum or site where video hosting is enabled. Some sites do monitor content, but there is often a latency period, or window in which unmonitored content can be seen by anyone before it is taken down.

 
