Worried About the Wrong Things


by Jacqueline Ryan Vickery


  The panics discussed in this chapter demonstrate that risks become visible only when they threaten otherwise protected and privileged young people, who do not fit the stereotypical image of “at-risk” populations. When privileged young people are perceived to be threatened, their stories gain attention and concern. This is evident when parents and teachers speak to what a “good kid” a victim or an offender is, or how they are “surprised this could happen to their child” (Thiel-Stern 2014). News coverage emphasized how Megan Meier’s parents did “everything right.” Such rhetoric is largely absent when “at-risk” youth are subjected to harm; instead they are likely to be blamed or held responsible for any harm they encounter. The policies explored in this chapter demonstrate how online risk and harm have been shaped by middle-class understandings of protection and innocence and how technology and youth become discursive sites for governmental intervention and control.

  Both of these implications—drawing attention to sensationalized harms and protecting childhood innocence—lead to the third takeaway from this chapter’s harm-driven policy analysis: the failure to equip young people with the resources and education to safely navigate risk. There is inherent risk in everything we do. Banning social media will not eliminate the risk of exposure to unwanted pornography, sexual predation, or peer aggression. If our goal is to eliminate risk, we will fail every single time. And even if we could craft a policy that eliminated all these risks by denying young people access to particular content or websites, young people would still lose those regulatory and restrictive protections when they turn 18 and gain the constitutional rights of adults. How, then, could we expect them to be prepared for the inevitable risks they will eventually face? Youth is a time of learning and preparing for adulthood, and that means helping young people identify and navigate risks, not avoid them.

  As an alternative approach, opportunity-driven expectations recognize our responsibility as a society to help young people identify and assess risk. Regulations that move beyond minimizing risk to balance and expand opportunities will help young people decide which risks are potentially beneficial and worthwhile and which are not. As will be discussed in the following chapters, an educational approach recognizes and values young people’s desires and experiences by building trusting relationships with adults who, rather than look over their shoulder, “have their back” (to paraphrase Jenkins 2007). This approach validates the experiences and expertise of educators and school districts to discern the appropriate measure of guidance for their students. Opportunity-driven expectations do not construct a monolithic view of youth, but instead account for variations among different communities, developmental stages, and experiences. They balance risk by crafting regulations that simultaneously expand opportunities for positive experiences and minimize exposure to harm. Safety should not be polarized as the opposite of risk; rather, a discourse of safety must strive to separate risk from harm and to help young people learn to navigate positive and negative opportunities by respecting their experiences, their values, and their rights.

  The rest of the book examines how discursive understandings of risk and harm-driven expectations shape local policies and practices. These prevailing fears—porn, predators, and peer interactions—dominate public imagination, conversation, resources, and policies. However, what effect does this narrative have on the lived experiences of actual young people? Such a question is particularly challenging to answer when we remember that many policies are predicated on insubstantial claims and assumptions of harm in the first place. Equally important, what are the other risks and harms that are subsumed by these visible and attention-demanding discourses? To answer these questions, I explore the unintended consequences of the Children’s Internet Protection Act at Freeway High in order to examine how the policy actually exacerbates some risks. I then address the second question: What else might we be concerned with—that is, what risks are rendered invisible as a result of these media-fueled panics? It is far more attention-grabbing to discuss and worry about porn, predators, bullies, and sexting than it is to concern ourselves with the intensification of social inequities. But we must address those inequities if we want to create a safe and equitable digital world for all young people—a world in which risks are minimized regardless of privilege and opportunities are maximized across all populations.

  Notes

  1. See boyd 2014; Cassell and Cramer 2008; Clark 2012; Finkelhor 2011; Livingstone 2008; Livingstone, Haddon, Görzig, and Ólafsson 2011; Madden et al. 2013; Watkins 2009.

  2. See appendix B.

  3. “Obscenity” is difficult to define legally; it is determined by whether the work depicts or describes, in a patently offensive way, sexual conduct or excretory functions specifically defined by applicable state law, and by whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value. “Pornography” is a more limited term; it refers to the erotic content of books, magazines, films, and recordings. Obscenity includes pornography, but may also include nude dancing, sexually oriented commercial telephone messages, and scatological comedy routines. US courts have had difficulty determining what is obscene. This problem has serious implications: if an act or an item is deemed obscene, it is not protected by the First Amendment; indecent and erotic materials, however, are granted legal protection.

  4. Free speech includes both the right to speak and the right to have access to speech. Denying access to content via censorship has historically been considered a violation of First Amendment rights.

  5. Usenet was a network for the discussion of particular topics and the sharing of files via newsgroups. Bulletin-board systems were a pre-World Wide Web form of communication in which users could upload images and files to “bulletin boards” for other users to download and view.

  6. One reason, for example, is that Rimm’s sample included only self-proclaimed “adult” BBSs (which required proof of age and a credit card payment) and a select group of Usenet newsgroups. In addition to the non-representative sample, Hoffman and Novak (1995) argue the study was misleading because Rimm did not disclose how he counted images or classified “pornography” (for example, in a data table he labeled supermodels as pornographic). Post (1995) points out that Usenet groups totaled 11.5 percent of Internet traffic at the time of the study and that only 3 percent of that traffic was associated with newsgroups containing pornographic imagery. He more accurately concluded that less than 0.5 percent (3 percent of 11.5 percent) of messages on the Internet were associated with newsgroups that contained pornography (and many of the messages in these “pornographic” newsgroups were text files that may not accurately be classified as sexually explicit). Although we do not have such data about sexual explicitness in the remaining 88.5 percent of Internet traffic outside Usenet, it is fair to say that only a small percentage of the imagery in the Usenet newsgroups was pornographic, and that the newsgroups themselves accounted for only a small percentage of the overall Internet (Mullin 1996).
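
  To make Post’s estimate concrete (the arithmetic here is mine, based on the two percentages he reports): 0.03 × 0.115 = 0.00345, or roughly 0.35 percent, which is indeed less than the 0.5 percent figure cited above.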

  7. For a detailed account of how the Rimm Study fueled a porn panic, see Mike Godwin’s 2003 book Cyber Rights: Defending Free Speech in the Digital Age.

  8. This double standard of allowing images of male nipples but not images of female nipples is still enacted within various social media policies today, including those of Instagram (Kleeman 2015) and Facebook (Esco 2014).

  9. Kuipers (2006) compared Internet regulation in the United States with that in the Netherlands and found that the US was more likely to enforce legal regulations that restricted minors’ access. The Netherlands adopted normative community standards, which were enforced by social norms, by parents, and via educational curriculum and initiatives. This is an example of how technologies are socially constructed and regulated in different ways. In the US, sexual content is regulated by law, although such regulation often competes with values such as free speech. In the Netherlands, the value of free speech outweighs heavy legal regulation, and protection takes the form of education, parental responsibility, and normative community standards of practice.

  10. The MPAA is an example of an industry regulating itself through a voluntary rating system: films are rated G, PG, PG-13, and R on the basis of content. This is in lieu of government regulation, which is complicated by protections of speech. Similarly, the television industry self-regulates through the technological implementation of the v-chip, which enables parents to block shows they deem objectionable or inappropriate. Both of these are examples of industry self-regulation rather than direct government regulation and laws.

  11. To Catch a Predator was a reality-style television series, hosted by Chris Hansen, that aired as a segment of NBC’s Dateline from 2004 to 2007. It also aired in the United Kingdom, in Australia, in New Zealand, and in Portugal. A spinoff book by Chris Hansen, titled To Catch a Predator: Protecting Your Kids from Online Enemies Already in Your Home, was published in 2007.

  12. Mobile phones in particular give children a way to get out of threatening situations and give parents a way to get in touch with their children more quickly if something goes wrong. Other factors leading to a decline in missing children are more aggressive searches for predators and more aggressive prosecution and supervision of them (also enhanced by technology), as well as response systems such as the Amber Alert (also enhanced by the availability of technology). He also notes that technology changes the way kids take risks (they are more likely to take risks online at home than in public); this puts more distance between children and strangers who are looking to harm them.

  13. This is merely a correlational relationship, and we cannot attribute the decrease in crimes to the Internet. Other social and educational programs aimed at curbing risky behaviors are likely contributors. The point is merely to demonstrate the inverse relationship: as crimes and risky behaviors have decreased, fears about the Internet exacerbating risk have increased.

  14. The Protecting Children Act also addressed continuing concerns about pornography by amending the Communications Act of 1934 to prevent video service providers from offering child pornography.

  15. The coining of the term “cyberbullying” is often credited to Bill Belsey, a Canadian educator and anti-bullying activist who founded the website cyberbullying.ca (Bauman 2011). Belsey claims to have coined the term after moderating a bullying-prevention site on which youth specifically discussed online bullying. But according to the Oxford English Dictionary the term was coined by Christopher Bantick in 1998 in an article published in the Canberra Times (Bauman 2011). Research by Sheri Bauman (2011) reveals that the word was first used in 1995 in a New York Times article about cyberaddiction. The word appears to have originated organically and simultaneously in different places, but it was consistently used to describe harassment or bullying that took place in online spaces. The term began generating attention among youth scholars and educators and within the medical community around 2006, which is when it began to enter public conversation and discourse (ibid.). Hinduja and Patchin (2009) coined the term “cyberbullicide” to refer to cyberbullying that leads to suicide.

  16. For discussions of post-structuralism, power, and language, see Barthes 1972; Eco 1976; Foucault 1970.

  17. Forty-six of the fifty states had passed anti-bullying laws by 2010. The exceptions were Hawaii, Michigan, South Dakota, and Montana (Bauman 2015).

  18. See Tinker v. Des Moines Independent Community School District (1969), which ruled that students’ speech rights could be restricted only if the speech “substantially interfered with the work of the school or impinged upon the rights of other students.” Bethel School District v. Fraser (1986) found that schools could censor vulgar and offensive language when it “undermined the school’s basic educational mission” (p. 684). Morse v. Frederick (2007) upheld a school’s restriction of a student’s speech at an off-campus event sponsored by the school. The latter decision “effectively expanded school authority beyond the campus to outside events sanctioned by the school, thereby continuing the post-Tinker trend of limiting student speech rights” (King 2010, p. 869).

  19. Lori Drew was originally convicted of “misuse” of computer technology under the Computer Fraud and Abuse Act. The conviction was later overturned (Steinhauer 2008; Zetter 2009).

  20. For a few examples, see Bluestein and Turner 2012; Stein 2010; Thevenot 2014. This was also the plot of a 2012 Lifetime movie titled Sexting in Suburbia, in which a mom took legal action against her daughter’s school after her daughter killed herself as a result of bullying.

  21. The 2009 Student Internet Safety Act should not be confused with the Internet Safety Act of the same year, which proposed that all ISPs and Wi-Fi providers, public and private, keep records and make them available to the police for at least two years.

  22. Sexting first reached the attention of the general public in 2009, when a scandal in rural Pennsylvania made national news (Searcey 2009) and was the subject of a 2012 Lifetime show titled “My Life is a Lifetime Movie” (Baker 2012). Around that time, CosmoGirl.com and the National Campaign to Prevent Teen and Unplanned Pregnancy conducted a survey of teen sexting that falsely linked sexting to the risk of pregnancy (Sex and Tech 2008).

  23. This case is particularly complex and widespread. For a detailed account and analysis, see The Atlantic’s article “Why Kids Sext” (Rosin 2014).

  24. Via state laws regulating age of sexual consent.

  25. Examples include the invasive procedures welfare recipients are subjected to, stop-and-frisk laws that target people of color, and electronic surveillance and monitoring of former prisoners.

  3

  Access Denied: Information, Knowledge, and Literacy

  It’s weird what sites are blocked and which aren’t when I think about it. … They have this whole list of sites that you can’t go to. If you try to go to them it says “Access denied.” Even when they’re useful sites, like tutorials and things you look up for school or need for a project, nope.

  Anna (18 years old, Mexican-American)

  Schools undoubtedly have an obligation and a responsibility to protect students and ensure safe learning environments. In compliance with the Children’s Internet Protection Act (see chapter 2), Freeway High enables a firewall that heavily regulates students’ (and, most of the time, teachers’) access to online content. The firewall blocks access to websites deemed inappropriate or harmful. This includes sexually explicit content (porn, nudity, and so on), but also educational sexual resources (e.g., resources related to contraception and to sexually transmitted infections). Additionally, the firewall blocks both students’ and teachers’ access to all online videos. Sites such as YouTube are blocked entirely; CNN and local news broadcast stations are accessible, but their embedded videos are blocked. The school also blocks students’ access to Facebook, Instagram, Twitter, and other social media platforms. Images on sites are blocked if the filter deems them inappropriate; this includes all nudity, even artistic renderings of the human body (e.g., Michelangelo’s David) and human anatomy in the context of sex education (with the exception of some pre-approved sites). Other sites are blocked on the basis of violence or derogatory language. All these sites are blocked in compliance with federal laws, on the premise that blocking them reduces exposure to risky or harmful material; in practice, however, the blocking is executed at the discretion of commercial firewall services. Filters reveal expectations of risk, and thus I want to step back and analyze the embedded assumptions about what kinds of content are deemed harmful and what kinds are not.

  The central debate over technology use at Freeway High is, on the surface, about access—who can access technology when, where, why, how, and for what purposes. On the one hand, the policies appear to be restrictive and to stem from harm-driven expectations; on the other hand, the school’s curriculum stems from opportunity-driven expectations that enhance learning. The students of Freeway High are largely from low-income families, and the school is dependent upon E-rate discounts for telecommunication services; this means it is required to enable firewalls that block students from accessing information deemed inappropriate. Consequently, federal policies such as the Children’s Internet Protection Act play a role in shaping harm-driven expectations and enacting risk-minimizing practices. Filters, in addition to bans on mobile devices (see chapter 4), work in tandem to construct technology as a risk or threat that must be regulated and controlled. Alongside these prohibitive policies, however, the school also provides students with opportunities to attain and enhance technology skills: it maintains several computer labs with up-to-date equipment and software, offers several technology courses, and financially supports after-school digital media clubs (see chapters 5 and 7). Thus, at the other end of the spectrum, we see opportunity-driven expectations via practices that support—and even celebrate—technology as a creative and vocational opportunity for students. In view of the contradictory messages “technology is harmful” and “technology is an opportunity,” it is understandable that students are frustrated by the policies and practices that regulate access to and use of technology at school.

 
