Worried About the Wrong Things


by Jacqueline Ryan Vickery


  Notes

  1. For fuller discussions, see the introduction to Rainie and Wellman 2012, chapter 1 of McChesney 2013, and Marvin 1988.

  2. Mods and rockers were two opposing youth subcultures in Britain in the 1960s whose identities were structured around musical genres. Rockers favored rock ’n’ roll; mods favored soul, R&B, or ska.

  3. The notion that a generation can be defined by its use of technology has been criticized for being technologically deterministic. See Buckingham 2006.

  2

  Policies of Panic: Porn, Predators, and Peers

  Something about the combination of sex and computers, however, seems to make otherwise worldly-wise adults a little crazy.

  Philip Elmer-Dewitt, Time, July 3, 1995

  [The girls] just fell into this category where they victimized themselves.

  Major Donald Lowe, investigator in the Louisa County High Instagram sexting scandal, November 2014 (source: Rosin 2014)

  Rising concern about young people’s use of digital media has led to public pressure for “somebody to do something.” Often that “somebody” comes in the form of formal governmental interventions, such as policies and public campaigns aimed at controlling risks. As will be demonstrated, regulating technology and young people’s use of technology is not as straightforward as it may at first appear. Even when there is consensus about what constitutes harm, as with online predators (who universally threaten normative understandings of young people’s innocence and are thus a seemingly easy example of something young people need protection from), the mechanisms through which we protect young people are inextricably linked to other competing values that make regulation difficult. For example, regulations that deny minors access to computers or websites must be balanced against rights of privacy and freedom of speech. For that reason, regulation is fundamentally complicated and often controversial. Through an analysis of various attempts at regulating young people’s use of digital media in the United States, we can more fully investigate expectations of both youth and technology and examine how constructions of risk are mobilized. Even when harm is universally agreed upon (e.g., when there is agreement that online predators are dangerous), the ways in which we attempt to intervene are value-laden and make visible our assumptions and expectations of young people and risk.

  Discourses of risk, youth, and technology are so deeply embedded within our collective imagination that it can be difficult to unpack the assumptions and expectations that produce such concerns. Because discourses are often visible in their effects but often are invisible in their constructions, it is imperative to examine moral-panic discourses alongside their effects. Risks are often so taken for granted that it can be difficult to understand how they are constructed, enacted, and mobilized throughout culture, history, and society. “Moral panics,” Lumby and Funnell write (2011, p. 280), “constitute an intense site of debate about ideas that are grounded in belief systems and that are connected to embodied and visceral ways of knowing and to ideological systems of meaning.” Federal and state intervention strategies, in the form of policies, offer a visible response to moral panics that allow us to examine how constructions of risk and expectations of youth and technology are articulated, enacted, and legislated.

  The goal of this chapter is not to offer a comprehensive and exhaustive account of all attempted and actualized policies aimed at regulating young people’s use of technology. Nor is it to deeply analyze the moral panics and actualized risks about young people’s media use, as there already exists great empirical research about risk, youth, and media.1 As was noted in the introduction, this book aims to shift our focus away from the loud, prominent risks dominating media attention; however, such a move must be contextualized within the broader mediascape of dominant panics and concerns, which is what this chapter aims to do. Rather than chronicling all the panics in detail, the goal here is to use government-sanctioned policies as an entry point for examining how risk and anxiety are mobilized in society and, accordingly, how they shape expectations of youth and technology.

  In her justification for studying sexting discourse through an analysis of policies, Amy Adele Hasinoff (2015, pp. 166–167) explains: “Law and policy texts position themselves as authorized by the institution of democracy. While political rhetoric usually advocates a position, it is designed to and often claims that it represents public interest and opinions even while attempting to persuade. Policy makers routinely rely on anecdotes and statistics to make their arguments, but there are no norms or standards that prevent gathering evidence from dubious sources.” Relying on unsubstantiated claims and statistics is a common characteristic of many of the policies I address, which demonstrates how data are sometimes used to represent “public interest” even when the studies are not sound. Policies are certainly not the only area we could consider in order to investigate how discursive constructions of risk are mobilized, but they offer a productive site of analysis precisely because they are highly controversial: they appeal to mainstream public assumptions about youth and tend to generate a great deal of media and public attention. State-sanctioned regulations require policy makers, advocates, and opponents to articulate their competing viewpoints, which makes them fruitful sites of analysis. Further, if passed, policies have a measurable and visible impact on the day-to-day lives of youth. In sum, policies are an appropriate space for analyzing risk discourse because they both reflect and inform expectations of youth and technology.

  Since the 1990s there have been three substantial waves that reflect policies of panic regarding young people and digital media technologies. I refer to them as the porn panic, the predator panic, and peer fear. The three waves are not mutually exclusive and at times overlap; each offers a categorical organization for examining how media and lawmakers respond to the perceived risks associated with adolescents’ online practices. The porn panic refers to the fear that young people will be inadvertently “bombarded” with perverse pornographic content online, and more broadly a fear about minors’ access to inappropriate sexual content in general. The predator panic is similar in that the fear centers on concerns about sex(uality), specifically inappropriate contact between minors and adults who are “lurking online” for young unsuspecting victims. Peer fear complicates discourses of youth and risk by focusing on harm associated with inappropriate behaviors among and between peers (rather than adults and adult content). Specifically I examine the peer fears about cyberbullying and sexting, which are distinct but which overlap in some instances. With peer fear, youth become a complicated site of discursive tension because there are no clear victims or perpetrators—an individual can simultaneously be both—and thus young people themselves are concurrently considered to be at risk and at fault.

  Policies aimed at protecting young people’s digital media use and practices reflect harm-driven expectations and privileged perspectives of risk and harm. They construct young people in paternalistic and narrow ways that do not account for young people’s agency, discretion, consent, and contextualized practices and desires, but rather rely on overly restrictive and protectionist policies and constructions of minors as vulnerable. Despite the fact that many of the studies and texts that contributed to the panics have since been debunked, the threat of risk continues to discursively construct the Internet as a dangerous space for young people. From Foucault’s perspective,2 research that identifies a population (in this case minors) as being “at risk” renders the population governable. The label “at risk” is used to justify control and intervention, often in the form of policies. The act of naming a population as being at risk and constructing the Internet as risky shapes discourse and expectations, which in turn implicates practice. In other words, if the Internet is constructed as a dangerous space, then young people are positioned at risk, which positions policy as a necessary intervention. This is not to deny the existence of potential harms associated with young people’s online practices; however, it is to say that policies are constructed on the premise that risks should be entirely avoided. Foucault argues:

  Truth isn’t outside power. … It is produced only by virtue of multiple forms of constraint. And it induces regular effects of power. Each society has its regime of truth, its “generalized politics” of truth; that is, the types of discourse which it accepts and makes function as truth, the mechanisms and instances which enable one to distinguish true and false statements, the means by which each is sanctioned … the status of those who are charged with saying what counts as true. (1980, p. 131)

  Within policies of panic, the accepted “truth” is that the Internet is inherently dangerous for youth and that safety must be upheld above all other values. Young people—and by extension their families—are tasked with the burden of enacting risk-avoidance strategies.

  Policy interventions attempt to reduce the risks young people may encounter online, but they also aim to reduce adult anxiety. Jackson and Scott (1999, p. 86) assert that “risk anxiety helps construct childhood and maintain its boundaries.” In part this is because anxiety results from the continual historical perception of young people as innocent and in need of (adult) protection (Kincaid 1992). Scott, Jackson, and Backett-Milburn (2003, p. 700) write that “the social world of children is divided into safe and dangerous places which has consequences for children’s use of space, where they are allowed to go and the places they themselves feel safe in, frightened, or excited by.” Harm-driven expectations continually construct the Internet as a dangerous space and effectively construe all risk as harmful. Rather than enabling young people and empowering adults to help youth navigate risks, such policies attempt to prevent risky encounters and behaviors altogether—often with unequal consequences for different youth populations. Further, policies contribute to monolithic constructions of minors that fail to account for developmental, cultural, and emotional differentiations. In the remainder of the chapter, through an analysis of six federal policies and a few state policies, I examine the harm-driven expectations and consequences of privileged constructions of risk and youth.

  Regulation Is Tricky

  There are various ways in which we as a society aim to regulate young people’s use of digital media. Some parents actively monitor how much screen time their children can have every day; other parents choose to put filters on the home router that block objectionable material; still others deny their children access to computers without direct parental supervision, or may require their children to earn screen time through good grades, chores, and other behaviors. In all these examples, parents are relying on different modes of regulation as a way to monitor their children’s behaviors and practices directly and indirectly. As was noted in the introduction, Lessig (2006, p. 124) categorizes the four constraints that function as modalities of regulation as: architecture (or “code” in digital spaces), the market, norms, and law: “The constraints are distinct, yet they are plainly interdependent. Each can support or oppose the others. … Norms constrain through the stigma that a community imposes; markets constrain through the price that they exact; architectures constrain through the physical burdens they impose, and law constrains through the punishment it threatens.” All these variables—law, norms, market, and architecture—regulate behavior in different spaces and at different times, but one factor could present a greater regulatory constraint on a behavior than another factor. Take smoking as an example again. Minors’ ability to smoke is most strictly enforced via laws, whereas adults’ smoking practices may be more notably regulated via social norms (e.g., whether or not their friends smoke) or concerns about their health. All these variables are always already interacting, and at times certain constraints regulate more directly, intently, or transparently than others.

  Because it is inherently difficult to directly regulate young people—due to parental autonomy over raising their children, as well as the difficulties in regulating private businesses such as Internet service providers—public spaces become a way to indirectly regulate media and technology. Public schools and libraries are central spaces for risk interventions and regulations. In the United States there is a long history of the federal government’s mandating specific protections and educational initiatives to protect minors from real and imagined harms, including the sale and availability of tobacco and alcohol products, exposure to and the effects of advertising, data collection, obesity, and exposure to sexual and violent content. In view of this history, it is no surprise that the government would intervene in managing young people’s digital media practices as a way to protect minors from potential harms. In other words, the fact that the government is regulating practices is of little interest in and of itself. What is worth considering is how risks are produced in the first place. Understanding how risks are constructed allows us to interpret regulations from a value-laden perspective, rather than as a neutral intervening strategy of protection.

  The Porn Panic

  One of the earliest and still ongoing concerns about young people’s online experiences focuses on access to pornography and sexually explicit content. Obviously there are potentially negative and detrimental consequences of exposing young people to graphic sexual content before they are emotionally mature enough to process and understand what they are experiencing. Yet evidence of the harmful effects of pornography is inconclusive (Bryant 2010; Owens et al. 2012; President’s Commission on Obscenity and Pornography 1970), and society’s continued focus on protecting young people from presumably harmful pornographic material reveals the normative expectations of childhood innocence. Harm-driven expectations clearly propel regulatory conversations related to pornography and sexual content.

  Although it may seem simple enough to pass regulations that decrease the likelihood of young people coming into contact with online pornography, the ins and outs of such regulations are much more complicated. For one thing, filters that block all sexually explicit material infringe upon adults’ rights to freedom of speech and autonomy of choice, and lead to complicated debates about censorship and morality (Godwin 2003; President’s Commission … 1970). Second, while most would agree that young people of a certain age should be barred from exposure to pornographic images (i.e., expectations that porn harms innocence), there is little consensus as to what that “certain age” should be. As will be demonstrated, conversations about porn and sexual content (including information about sexuality and sexual health) rely on constructions of childhood as a naturally (i.e., biologically) innocent developmental stage (Gabriel 2013) and presume that all exposure to sexual content will threaten innocence and result in harm. Third, what actually constitutes porn is elusive. “Definitions of ‘pornography,’” Attwood writes (2002, pp. 94–95), “produce rather than discover porn texts and, in fact, often reveal less about those texts than they do about fears of their audiences’ susceptibility to be aroused, corrupted or depraved.” Attwood argues that the indistinct definition of porn—which has been applied to Pompeian frescoes, to Shakespeare texts, and to a variety of erotic media (Kendrick 1987)—leads to confusion and regulatory challenges. Many policies that aim to regulate or prohibit access to sexual content fail to acknowledge young people’s deliberate and healthy desire for information, education, and understandings of their own emerging sexuality. Even if we as a society can come to an agreement about how to define porn and agree that it is a threat to young people, the regulations are controversial and problematic when we try to put boundaries around rights of access.

  In the sections that follow, I analyze three federal policies aimed at regulating pornography, but more broadly at sexual content writ large: the Communications Decency Act (1996), the Child Online Protection Act (1998), and the Children’s Internet Protection Act (2000). Like other media scholars (Mazzarella and Pecora 2007; Thiel-Stern 2014), I do not analyze the policies in isolation; I also take into account how journalism and media construct youth sexuality. Mediated discourses are important to consider because they have the power to name—and thus produce—risks; they work alongside policy to shape public opinions and expectations about youth and technology.

  The Communications Decency Act

  In 1996, Congress passed the Communications Decency Act (CDA), which was an attempt to regulate sexually explicit material; the most controversial and relevant section addressed indecency on the Internet. At that time, the Federal Communications Commission (FCC) already regulated indecent content on television and radio, but the Internet had not previously been affected by indecency policies in the United States. After the National Science Foundation Act opened up the Internet for commercial use in 1992, there was rising concern about the accessibility of inappropriate material, namely pornography. Arguably, the anxiety about the availability of sexual content was widespread, but, as will be demonstrated, minors became the locus of concern because they were easier to regulate and control than legal consenting adults. In order to understand the challenges of regulating sexual content, it is important to distinguish between obscenity and indecency from a legal perspective. Whereas obscenity3 is not granted protections of free speech under the First Amendment, indecent and erotic material is more subjective and open to interpretation. Historically, indecent speech has received First Amendment protection in the United States. Pornographic material includes both obscene and indecent content, thus making regulation difficult since the US government cannot ban or censor non-obscene material.4

 
