In a Different Key

by John Donvan


  True, Rollens lacked any statistical evidence of an increase, and he knew that; that’s why he needed the DDS’s numbers. But he could see with his own eyes that “busloads of kids” were showing up at the same schools and centers where, several years earlier, his son Russell had started getting help with far fewer children by his side. Meanwhile, through one of the organizations Rollens had helped start—Families for Early Autism Treatment—parents he’d never met were reaching out via its new Internet newsletter. He seemed to be hearing a new diagnosis story every day.

  At this time, the DSM still stated the standard assumption that autism occurred at the rate of 4 to 5 people in 10,000, a number derived largely from Victor Lotter’s survey of a single British county more than thirty years earlier. Few follow-up studies on autism prevalence had been conducted in the interim, largely because the research community saw no need for them.

  Rollens wanted the California records to fill this gap. Because the report tabulated every person receiving services during the period under review, with each autism diagnosis confirmed by professionals, he was confident that it would draw the trend line upward year by year and prove his theory right.

  The DDS report was completed in the spring of 1999, and Rollens used contacts from his former life in the senate to get his hands on an early copy. What he read startled him. Between 1987 and 1998, according to the report’s summary, there had been a 273 percent increase in the number of people availing themselves of state-provided services for autism. By “autism,” the report meant only people with the “classic” sort of autism as spelled out in the DSM. As the state did not provide services to people diagnosed with Asperger’s disorder or Pervasive Developmental Disorder Not Otherwise Specified, those conditions were not included in the report’s count. Instead of 4.5 per 10,000 people with autism, the numbers of those getting services in California came closer to 60 per 10,000. The trend was far sharper than even he had suspected.

  Rollens immediately leaked the report to the Los Angeles Times, which ran a story under what, to Rollens, was the perfect headline: “State Study Finds Sharp Rise in Autism Rate.” Both the body of the piece and one of the sub-headlines used the word “epidemic.”

  The “California Study,” as it came to be called, lit a match under a fire that did not go out for the next ten years, as the word “epidemic” became attached to autism with such frequency and with such force that the quotation marks around it soon fell away.

  —

  THE DISTURBING SPECTER of an autism epidemic became, by reluctant popular consensus, yet another one of the psychological stressors of the twenty-first century—another reason the world was a dangerous place for bringing up kids. Child magazine would capture this anxiety perfectly when it labeled autism the “Disorder That’s Defining an Era.”

  The trope was seized on by advocacy groups of all stripes, as each recognized how much more persuasive their appeals for funding would be when autism could be framed as a terrifying national crisis. Leaders of these groups began citing the epidemic argument in every speech and press release, hammering home the urgency of their cause with two statistics: what the autism rate used to be and what it was now. Different groups used different numbers, but whichever pair they chose, the statistics were alarming.

  The news media ran hard with the epidemic story. Time made it a cover story in 2002: “Inside the World of Autism: More Than One Million Americans May Have It, and the Number of New Cases Is Exploding.” NBC News partnered with Newsweek in 2005 to produce a weeklong series of programs under the rubric “AUTISM: The Hidden Epidemic?”

  Around this same time, in response to the public uproar, Congress held a series of hearings led by Republican representative Dan Burton. Burton took as his starting point the assumption that the nation was facing an epidemic that demanded investigation. “We have an epidemic on our hands,” he declared in 2002, “and we in Congress need to make sure that the NIH and the CDC treat this condition like an epidemic.” In coming elections, even candidates for the White House would be expected to have formulated positions on the epidemic question. Most took its reality as a given.

  Corroboration for this came in the series of statistics reported throughout the decade by the Centers for Disease Control and Prevention. In 2004, the CDC published an alert for pediatricians reporting that autism in the United States affected 1 in 166 children. In 2007, the CDC announced a new number: 1 in 150. Two years later, it was 1 in 110. The same trend line that California had unearthed back in 1999 was now, according to the CDC, a nationwide concern.

  It was clear—the numbers proved it. This thing was getting bigger all the time. Everyone had reason to be scared.

  —

  BUT AS ALWAYS, there were plenty of ways to count autism, and not all of them added up to the picture of an epidemic that Rick Rollens, Dan Burton, and the advocacy organizations projected. Indeed, it was the signal success of all those players that they kept the politicians, the public, and the media talking about an epidemic when, in the view of most social scientists who looked into the matter, the statistical case for any massive increase in the incidence of autism was highly dubious. Many of the experts suggested that the epidemic that had made everyone so scared actually might not exist at all.

  The experts’ misgivings began with the California numbers brought to light thanks to Rick Rollens’s persistence. Those numbers seemed to tell an open-and-shut story: more children receiving state services for autism meant that autism was on the rise in the population. But that formulation assumed that calculating demand for services was the same thing as counting, one by one, all the kids in California with the condition. Not only were those not the same, but also, in practice, demand for services would never be a trustworthy yardstick for measuring the pace of autism’s spread in the population.

  For example, demand for anything—from drivers’ licenses to public playgrounds—could be expected to go up when a population increase was under way. Indeed, during the years covered by California’s autism report, the population did get significantly bigger, by roughly 16 percent. That was a factor an epidemiologist would need to take into account and possibly subtract from any overall trend being measured. It is not a complicated adjustment to make, and indeed, the team that produced the California numbers purposefully did not count children who had moved into the state during the years under examination.
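The population adjustment described above is simple arithmetic: divide the growth factor of the raw count by the growth factor of the population. As an illustrative sketch only—the DDS team handled the issue differently, by excluding in-migrants—here is that adjustment applied to the 273 percent and 16 percent figures from the text:

```python
def per_capita_increase(raw_increase_pct, population_growth_pct):
    """Convert a raw percent increase in a count into a per-capita
    percent increase, given population growth over the same period.
    Arguments and return value are percentages (273 means +273%)."""
    raw_factor = 1 + raw_increase_pct / 100
    pop_factor = 1 + population_growth_pct / 100
    return (raw_factor / pop_factor - 1) * 100

# Using the figures from the text: a 273% rise in the service count,
# set against roughly 16% population growth, still leaves a per-capita
# rise of about 220% -- population growth alone explains very little.
print(round(per_capita_increase(273, 16)))  # prints 222
```

The point of the sketch is only that the correction is trivial to compute, which is why, as the text notes, it was never the serious objection to the California numbers.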

  But there were many other confounding factors that were not so easily corrected for, and none was more nettlesome than the lack of clarity about who should be counted in the first place. It was the same old problem that had hampered the original prevalence study—the one conducted by Victor Lotter in Britain in the mid-1960s, where he had confronted a lack of clear criteria for the condition he was trying to track.

  Thirty-five years later, when the epidemic alarm revived efforts to figure out “true” prevalence, those efforts ran into the same kinds of obstacles Lotter had faced. Arguably, the problems were even worse in the late 1990s, given how frequently the DSM kept moving the goalposts for autism. People diagnosed with autism using the DSM in 1997, for example, might not have qualified for it using an earlier DSM in 1990 and vice versa.

  Studies conducted later in the new millennium demonstrated that such outcomes were entirely real. Researchers in 2012, for example, revisited a set of excellent data that had been collected in Utah in the 1980s. UCLA’s Ed Ritvo had spent four years—1982 to 1986—attempting to identify every single person in the state of Utah between three and twenty-five years of age who possibly had autism, whether officially diagnosed or not. A total of 379 individuals were located and examined, of whom 241 were deemed to have the condition. That left 138 who, though unusual in their behaviors, fell short of the diagnosis based on Ritvo’s criteria, which he had lifted from the 1980 edition of the DSM.

  More than twenty-five years later, the same data was run again. Like Ritvo before them, the researchers wanted to identify autism in the exact same group of 379 individuals—but they wanted to see the effect of using a more up-to-date definition of autism. Fortunately for them, Ritvo had kept excellent records, which he readily shared. But in place of his criteria, the younger researchers substituted the autism checklist that arrived in the 2000 edition of the DSM, which had been through three revisions in the intervening two decades. When they did this, the results were striking. Suddenly, the group’s autism “prevalence” shot up. The newer criteria had qualified an additional 64 individuals for the diagnosis, all of whom had fallen short of it in the 1980s. As a result, the portion of the original group that was considered to have autism was now roughly 25 percent bigger than a quarter century earlier. Clearly, this “increase” could never be used to claim that there had been a dramatic rise in the “true” prevalence of autism in Utah. Objectively, nothing had changed. Nothing, that is, except for the definition of autism.
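The recount arithmetic in the Utah example can be checked directly from the counts the text gives (a sketch; the variable names are mine, not the researchers’):

```python
# Counts reported for the Utah cohort (Ritvo's 1982-1986 survey).
examined = 379             # individuals located and examined
diagnosed_1980s = 241      # met the 1980-edition DSM criteria
added_on_recount = 64      # newly qualified under the 2000-edition criteria

diagnosed_recount = diagnosed_1980s + added_on_recount
assert diagnosed_recount <= examined  # sanity check: still within the cohort

# Relative growth in diagnoses with no change in the people themselves.
growth = diagnosed_recount / diagnosed_1980s - 1

print(diagnosed_recount)       # prints 305
print(round(growth * 100))     # prints 27 -- the "roughly 25 percent" rise
```

Note that 64/241 works out to about 27 percent, which the text rounds down to “roughly 25 percent”; either way, the entire increase comes from redefinition, not from any change in the 379 people examined.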

  These internal changes to the DSM, and the complications they created for studies of autism’s prevalence, were the parts of the story that social scientists proved remarkably ineffective at explaining to the public. They made references to “loosened criteria,” or a “broader autism phenotype,” but such language was not nearly as evocative as the word “epidemic.”

  The same was true when they suggested that one of the factors adding to the impression of more autism than before was what they called increased surveillance and reporting. This referred to the possibility that more autism was being reported because more people were on the lookout for it. Such patterns were a recognized phenomenon throughout medical history. Prevalence rates for chlamydia or gonorrhea, for example, generally rose in places where screening programs for those diseases were put in place. “Surveillance,” however, was not a word that conveyed this idea clearly.

  Neither did the word “reporting” communicate how powerfully prevalence could be affected by any particular authority’s approach to collecting information. This became apparent in the early 1990s when, thanks to parental pressure, Congress began requiring states to report the number of children with autism who were enrolled in special-education programs under the terms of the Individuals with Disabilities Education Act. Many of the children about to be counted were already in school by then, and some had been for several years. But previously, they had been lumped together into the category of “other health impairments,” or perhaps other categories such as “mentally retarded” or “learning disabled.”

  Starting with the 1992–93 academic year, however, schools had to begin reviewing these children’s evaluations to see if they should be moved into the new autism category. It took some years for each school district in each state to get this system up and running as local schools worked through their case files. The result was that, for several years running, state after state sent in numbers that, by themselves, leapt almost straight upward—which is what happens when any count starts from zero.

  Illinois, for example, reported only 5 children receiving services under the autism category in the 1992–93 school year, the first year it was required to start looking for them, but in 2002–03, the number had reached 5,800. One advocacy group employed this statistic under the heading “Autism Increases DRAMATICALLY” in its online newsletter. It also did the math for its readers, revealing that the prevalence for autism in Illinois had increased by an astounding 101,500 percent in a decade. This outshone even the numbers Congressman Dan Burton cited when he chaired a 2000 House hearing. “Florida has reported a 571 percent increase in autism,” he reported. “Maryland has reported a 513 percent increase between 1993 and 1998.” He also mentioned the original 273 percent number from the California report.

  This alarming string of numbers pointed to another odd aspect of the claimed national epidemic: its pace varied wildly from state to state. Even when states were next door to each other, their reported rates of autism prevalence were sometimes not even close. Thus, in 2002, according to federal data, Alabama had a prevalence rate of roughly 3 kids per 10,000, while Georgia’s was more than twice that. Not much separated these two states physically—just a river that does not even run the full length of their shared border. Minnesota and Iowa are merely separated by a straight line on the map, but in 2012, the reported prevalence rate in Minnesota was ten times higher than in its next-door neighbor.

  Social scientists had a good idea why this was happening. The information that was being pored over had been gathered by educational authorities, not public health agencies. The Individuals with Disabilities Education Act had provided a standard definition of autism but had left it to each state’s Department of Education to create its own criteria for determining eligibility for special-education services.

  Each authority built its own checklist, which ranged from as few as five items to as many as seventeen. Some states required strict adherence to some version of the DSM criteria, others to the IDEA definition, and several to both. Some required diagnosis by a board-eligible psychiatrist or licensed clinical psychologist, but others did not. In some cases, the decision to provide services—which was not at all the same thing as a clinical diagnosis—was left to a group including the parents, school principal, and special-education teachers. All these disparities left researchers dealing with data that was anything but uniformly derived.

  Moreover, rather than revealing “true” prevalence, these numbers represented what social scientists called “administrative” prevalence. Counting autism by counting the people receiving services was like counting vegetarians on an airplane by adding up orders for meat-free meals. Just as there would be all manner of ways to miss the true “prevalence” in that scenario, administrative prevalence of autism was subject to various distorting influences. These included simple clerical or arithmetic errors, as well as the inherent subjectivity of a diagnosis based on the observation of behavior.

  Even when the same criteria were being referenced, autism was still a diagnosis determined by a nonobjective measure—the opinion of whoever was asked to do the evaluation. Research showed clear geographic and socioeconomic trends in this regard. Diagnoses were more likely in communities that offered more services overall, and they were more commonly given to white and more affluent Americans than to members of ethnic minorities or children from poor families. It was also possible for a child who was denied a diagnosis by one professional to receive it from another. Indeed, in some areas, parents shared lists of diagnosis-friendly evaluators who could be counted on to give an autism label to a child whose symptoms might be borderline.

  Parents had a strong motive for such diagnosis shopping: thanks to their years of lobbying, schools had become much more responsive to the needs of children with an autism diagnosis than to those labeled with, for example, intellectual disability or some other kind of learning difficulty. Further, the autism label, again due to parent activism, had lost some of its stigma. It was known anecdotally that pediatricians and other professionals who held the power to label occasionally tilted the scale in the evaluations to ensure a child’s access to better programs and state services.

  In 2007, anthropologist Roy Richard Grinker quoted a senior child psychiatrist at the National Institutes of Health as saying, “I’ll call a kid a zebra if that will get him the educational services I think he needs.” New York neurologist Isabelle Rapin, another prominent researcher in the field, was candid about this phenomenon. “I admit up front that I have contributed to the ‘epidemic’ in New York,” she wrote in 2011, citing the example of a four-year-old patient she had diagnosed in the early 1990s as having “a severe developmental language disorder with serious behavioral problems.” Years later, his father phoned, seeking an autism diagnosis for his son. Based on that conversation alone, and on the leeway afforded by a newer, less restrictive definition of autism, Rapin agreed to give the young man the label of autism.

  This so-called diagnostic substitution could certainly account for some of the apparent increase in autism numbers. In the 1970s and 1980s, after the label “learning disabled” came into use, numbers for learning-disabled children in school soared across the nation as, simultaneously, the numbers of students labeled “mentally retarded” dropped precipitously. This was due, in large measure, to children with mild intellectual disability being shifted into the category that carried less stigma.

  The question of whether a similar dynamic was pushing up autism numbers fascinated a young social scientist in training named Paul Shattuck in the early 2000s. Shattuck was a graduate student at the University of Wisconsin who was working toward a PhD in social welfare. He wanted to study “the relationship between the rising administrative prevalence of autism in US special education and changes in the use of other classification categories.” Shattuck did not analyze or directly assess any children for his study. Instead, using data he collected from the US Department of Education, he looked at the annual state-by-state counts of children, aged six to eleven, with disabilities in special education.

  Shattuck’s results, which he published in 2006, were attention-getting and controversial for a number of reasons. Seen in aggregate, the data he reported showed that, in forty-four states, big upticks in “administrative” prevalence of autism went hand in hand with downticks in the numbers for children labeled “cognitively impaired” and “learning disabled.” It was as if a group of children had walked from one end of a seesaw to the other. Shattuck’s conclusion was that, at least in these states, diagnostic substitution appeared to account for much of the apparent increase in autism.

  Shattuck’s study had weaknesses, which he admitted and which others pointed out as well. His reliance on school-based data, whose very credibility was so much in question, was a problem. He also did not dig down to the local level, much less to the even deeper level where he could have tracked individual kids who had made the move from one category to the other. Finally, a pattern of diagnostic substitution did not emerge in a handful of states, including California, and he had no explanation for this.
