In an attempt to regulate pornography, CDA criminalized the transmission of "obscene or indecent" material to anyone under the age of 18. Because a sender cannot know who might have access to information posted online, CDA essentially limited adults' access to sexual content and criminalized indecent speech, thus restricting adults' First Amendment rights to freedom of speech. CDA, a product of what Godwin (2003) dubbed "The Great Cyberporn Panic of 1995," was overturned almost immediately after opponents argued that it would have chilling effects on adults' right to free speech. It was also argued that CDA infringed upon parental autonomy because it denied parents the right to decide what material was acceptable for their children (ibid.). Also contributing to the overturning of CDA was the fact that the language intended to protect young people from indecency was deemed too broad (Quittner 1996).
Though CDA was overturned, I want to back up and consider the context in which pornography and sexual content were being addressed in order to understand how media, journalism, and politics worked together to produce expectations of harm and to mobilize risk. When CDA was being debated in Congress, mainstream news media were also discussing the risk of pornography and inappropriate online material being available to minors. In a 1996 paper presented at a conference of the Librarians Association of the University of California, Santa Barbara, Dorothy Mullin (1996) chronicled the "porn panic" of the early 1990s. She found that journalists and popular news sources reported the ease with which young people could access pornography and sexual content on the Internet. Politicians and journalists drew analogies such as "the internet's red light district" (ibid.) and confidently proclaimed the prevalence, pervasiveness, and perverseness of minors' access to pornography on the Internet.
One of the most memorable and influential accounts that fueled the porn panic was a 1995 issue of Time whose cover photo showed a traumatized boy (white, approximately 10 years old) staring at a computer screen, his face frozen in horror and fear. The headline "Cyberporn" spanned the cover, followed by "Exclusive: A new study shows how pervasive and wild it really is. Can we protect our kids—and free speech?" (Elmer-DeWitt 1995). The Time article made the public aware of the risk of access to pornography, although it did not actually demonstrate the harm of inadvertent access. Risk and harm were conflated as a way to construct technology as threatening and therefore in need of government regulation, essentially mobilizing a discourse of the Internet as risk throughout society.
The Time article relied on a study known as the Rimm Study (Rimm 1995), which incorrectly claimed that 83.5 percent of the images on Usenet (a popular bulletin-board system at the time)5 were pornographic or obscene. Although the claims have been debunked, the inaccurate statistics have nonetheless worked to construct the Internet as a scary and dangerous space for youth. The fallacious study produced harm-driven expectations that continue to be used to justify regulations more than two decades later. The study was conducted by Marty Rimm, a Carnegie Mellon University undergraduate engineering student, and published as an article titled "Marketing Pornography on the Information Superhighway" in the Georgetown Law Journal, a non-peer-reviewed law journal. Since its publication the study has been criticized as misleading at best and has been thoroughly discredited by other researchers as unsupported and inaccurate (Cohen and Solomon 1995; Godwin 1998; Hoffman and Novak 1995; Marwick 2008; Post 1995; Rheingold 1995).6 Nonetheless, the debunked statistics from the Rimm Study were repeatedly reported and used to justify CDA regulations and restrictions intended to protect young people.
Drawing from the ways media are implicated in inciting fear and panic, scholars (Godwin 1998; Mullin 1996) largely blamed the porn panic on the rhetoric that was used in the Time article. The article was presented with the following headline and tagline:
ONLINE EROTICA: ON A SCREEN NEAR YOU
IT'S POPULAR, PERVASIVE AND SURPRISINGLY PERVERSE, ACCORDING TO THE FIRST SURVEY OF ONLINE EROTICA. AND THERE'S NO EASY WAY TO STAMP IT OUT
After an eight-sentence introduction about how easy it is to access pornography and erotica offline, the article's author, Philip Elmer-DeWitt, turned his attention to online erotica. His language both incited and indicated fear: "[S]uddenly the press is on alert, parents and teachers are up in arms, and lawmakers in Washington are rushing to ban the smut from cyberspace with new legislation—sometimes with little regard to either its effectiveness or its constitutionality." His evidence for this claim? The now-debunked Rimm report, which was to be released later that week. The article contained a bullet-point list describing the findings of the study, which included the pervasiveness of perverse content, men's dominance as consumers, claims that online pornography is a worldwide phenomenon, and the amount of money made from sales of these images. Elmer-DeWitt concluded that "the appearance of material like this on a public network accessible to men, women and children around the world raises issues too important to ignore" (p. 40). Right above this statement he explained that access to pornographic bulletin-board systems cost, on average, $10–$30 a month and required a credit card. On those grounds it is reasonable to assume that the majority of consumers were not children but consenting legal adults; nonetheless, children were lumped in as consumers. The article went on to discuss the benefits and negative consequences of the Internet: "This is the flip side of Vice President Al Gore's vision of an information superhighway linking every school and library in the land. When the kids are plugged in, will they be exposed to the seamiest sides of human sexuality? Will they fall prey to child molesters hanging out in electronic chat rooms?" (p. 40). Although the Rimm Study was about sexually explicit content (mostly scanned images of already existing pornography), the article lumped predators in with porn, thus further inciting fear and bolstering the perceived need for government regulation.
In an effort to quell fears, Elmer-DeWitt addressed how unlikely it was that minors would accidentally access pornography:
According to at least one of those experts—16-year-old David Slifka of Manhattan—the danger of being bombarded with unwanted pictures is greatly exaggerated. "If you don't want them you won't get them," says the veteran Internet surfer. Private adult BBSs require proof of age (usually a driver's license) and are off-limits to minors, and kids have to master some fairly daunting computer science before they can turn so-called binary files on the Usenet into high-resolution color pictures. "The chances of randomly coming across them are unbelievably slim," says Slifka.
Here we have a potential adolescent victim explaining that a minor's chances of inadvertently accessing porn were "unbelievably slim." What Slifka implied, of course, was that if minors were jumping through hoops and acquiring technical skills and currency to access pornographic content on Usenet, their access was deliberate. The panic construed all exposure to porn as harmful and ignored that exposure can also be intentional, and not necessarily harmful. Although the Time article criticized the Rimm Study (particularly for the fact that its data came from self-selected users interested in erotica), presented other similar studies that revealed less shocking findings, addressed the complications of regulating the Internet, and quoted experts and politicians on both sides of the debate, it nonetheless contributed to the rising panic about minors' access to porn online. Toward the end of the article, Elmer-DeWitt mused, "How the Carnegie Mellon report will affect the delicate political balance on the cyberporn debate is anybody's guess." What unfolded was a continued reliance on the study as evidence of risk and harm.
Despite the limitations of the study, Senator Chuck Grassley (R-Iowa) entered the study into the Congressional Record when he relied on its data and rhetoric as the basis for his Protection of Children from Computer Pornography Act of 1995. Senators James Exon (D-Nebraska) and Slade Gorton (R-Washington) used data from the study when they co-sponsored the CDA bill. The misleading, fear-mongering, and discredited study fueled a porn panic that was taken up by the media and by politicians.7 Fiction was functioning as truth (Walkerdine 1997) in that fallacious studies were used by authority figures and experts to justify policies. Although CDA was overturned, the expected and perceived threat of pornography has continued to shape policies and practices more than twenty years later. On the twentieth anniversary of the publication of the Time article, Elmer-DeWitt wrote an article for Fortune explaining the fallout and how the article shaped his career and fueled a panic. He even disclosed that a Time researcher assigned to his story remembers the study as "one of the more shameful, fear-mongering and unscientific efforts that we [Time] ever gave attention to" (Elmer-DeWitt 2015).
The combination of the Rimm Study, the Time article, and the article’s use as fodder for politicians and “expert” opinion allows us to trace how harm-driven expectations produced a discourse of fear and risk that fueled a moral panic and increased calls for restrictive regulations. Congress’ attempt to regulate the Internet was an example of collective harm management overriding self-responsibilization—particularly because government regulation was not the only mode of protection at this time, as is addressed in the following section.
The Child Online Protection Act
Congress tried again in 1998 to draft a policy that would protect minors from inappropriate material online (again, primarily pornography). The Child Online Protection Act (COPA) was a watered-down version of CDA. Unlike CDA, COPA attempted to restrict only commercial communication and affected only Internet service providers (ISPs) in the United States. Rather than criminalizing the transmission of sexual content to minors, the Act attempted to regulate the private sector by requiring ISPs to restrict minors from accessing sites that contained materials deemed "harmful to minors." COPA defined "harmful to minors" much more broadly than obscenity, encompassing material that "appealed to prurient interests" as deemed by "contemporary community standards" (Child Online Protection Act, 1998, section 231). This included all sexual acts and human nudity, including all images of female breasts but not male nipples, which reveals the sexist double standard that renders the female body categorically and inherently sexual, and thereby explicit.8 Most detrimentally, such a broad definition also blocked minors' access to educational information, including health and sexual information online, even though access to sexual health education is a valuable part of adolescents' developing sexual identities.
In contrast, an opportunity-driven approach would recognize that increased access to sexual health information is a risk, but is also an opportunity to promote healthy and safe sexual exploration and development. As will be further addressed in the next chapter, offline sex-education material may be available to some youth, but the Internet provides an accessible way for minors to seek out information in private. This is particularly beneficial for young people whose questions, desires, and sexual orientation may not align with parental and cultural expectations. In addition to educational resources, the Internet also provides a supportive space and community for gay, lesbian, transgender, queer, and questioning youth to explore their sexuality and sexual identities (Gray 2009; Kanuga and Rosenfield 2004; Vickery 2010). Yet COPA framed youth sexuality as inherently harmful and attempted to deny minors access to sexual content.
Interestingly, although COPA insisted upon stricter regulation of private businesses, the congressional findings also recognized that the industry was already attempting to provide parents with ways to protect their children. In other words, the controversy had less to do with whether young people's access to sexually explicit (and educational) content should be restricted than with who should be responsible for such regulation, and how. There were other modalities of regulation in place that balanced risks and opportunities, including the market, norms, and technological solutions. The congressional findings included the following statement: "To date, while the industry has developed innovative [technological] ways to help parents and educators restrict material that is harmful to minors through parental control protections and self-regulation, such efforts have not provided a national solution to the problem of minors accessing harmful material on the World Wide Web" (congressional findings, COPA, 1998, section 1402). The report acknowledged technological advances that enabled parents to protect their children, yet the problem was explicitly constructed as a national problem that required a national (read: government) form of regulation. Such language is indicative of the ways in which risk discourse contributed to a panic that deemed parents' and educators' self-regulatory behaviors insufficient means of intervention and protection. With COPA we see how moralization discourse called for government intervention and constructed sexual content not as a risk but as a harm (despite the fact that significant harm had not been clearly demonstrated). This is an example of the way harm-driven expectations push collective harm management, rather than self-responsibilization (Hunt 2003; Hier 2008) and education, to the frontline of public discourse.
COPA was eventually deemed unconstitutional for infringing on the protected speech of adults. It was also criticized for its broad language, which made it difficult to define or enforce. Lowell A. Reed, a US District Court judge, struck down the law for violating the First and Fifth Amendments and, interestingly, added that "perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection" (Urbina 2007). Youth in the United States are often denied legal rights they later attain as adults. Often the more productive approaches, stemming from opportunity-driven expectations, are parental regulation, education, and professional guidance that help youth safely (and eventually autonomously) navigate or avoid risks, rather than harm-driven policies that deny minors legal rights.
Additionally, Judge Reed’s remarks highlight the continued discursive binary of child/adult, which ignores the complexity of “youth” that is neither fully child nor legally adult. Youth occupies a transitional period from childhood to adulthood and highlights the limits of the discursive boundary (Gabriel 2013). At the age of 18 young people are automatically granted the rights of adults. But, as Judge Reed hinted, perhaps young people ought to gradually attain rights and responsibilities—to legally “come of age.” Such an approach would recognize youth as a transitional period between childhood and adulthood, rather than an absolute either/or existence. For that reason, regulation could protect the most susceptible and vulnerable populations (e.g., young children) while granting older youth responsibility and rights.
Looking Beyond the Law
There have been attempts to account for complexity within our legal understandings of protection, risk, and harm; however, the logistics of passing such nuanced regulations are complicated. When Congress initially passed COPA, it also created an eighteen-member committee whose purpose was to "identify methods to reduce minors' access to harmful material on the internet" (Goldstein 2002, p. 1190). After two years of evaluation the panel recommended that libraries promote public awareness of technological tools available to protect adolescents, that schools and libraries adopt acceptable use policies, and that policy focus on the design and adoption of curricula and education intended to protect adolescents. Remarkably absent from the committee's recommendations was any mention of requiring filtering software in libraries; instead the focus was on education and self-regulation. One member of the committee—Jerry Berman of the Center for Democracy and Technology (a free speech advocacy group)—wrote the following: "Acknowledging the unique, global character of the Internet, the commission concludes that new laws would not only be constitutionally dubious, they would not effectively limit children's access to inappropriate materials. The Commission instead finds that empowering families to guide their children's Internet use is the only feasible way to protect children online while preserving First Amendment values" (Statement of COPA commissioner Berman, 2002). This approach would have allowed other modes of regulation (e.g., norms, the market), rather than legal restrictions, to offer protection.9 Additionally, it would have allowed for a more continuum-based construction of youth and empowered adults to exercise discretion in protecting young people.
The members of the congressional panel were not the only authorities to suggest less restrictive means of regulation; other experts proposed technological alternatives as well. For example, in the mid 1990s the World Wide Web Consortium (W3C) suggested a model of industry self-regulation similar to that implemented by the Motion Picture Association of America (MPAA) and the National Association of Broadcasters.10 The W3C recommended embedding within the HTML protocol a voluntary rating system that would enable parents to block certain content. The Internet Engineering Task Force was also working on embedding a ratings system into web addresses (Abernathy 1995). This would have allowed the industry to empower parents, schools, and libraries to filter content on the basis of self-regulation and localized community standards rather than overarching government intervention. It would also have enabled schools to implement filtering systems that would account for increasing maturity and responsibility. Elementary schools could block out material deemed inappropriate for young children; high schools could choose to allow more mature content, such as sex-education material or nude art. However, when Congress once again attempted to regulate indecency, this time via the 2000 Children's Internet Protection Act (CIPA), paternalistic restrictive regulation continued to take precedence over education and parental guidance.
The Children’s Internet Protection Act