Burned


by Edward Humes


  “This method of identification is in such general and common use that the courts cannot refuse to take judicial cognizance of it,” the Illinois Supreme Court boldly concluded in the Jennings case, reciting what seemed to be, even then, the long and solidly established history of dactylography. And in the years that followed, every state, federal, and local court has agreed in untold millions of cases, embracing fingerprint comparisons as a kind of forensic Holy Grail that has withstood the test of time.

  The problem is, none of the original reasoning, which has been relied upon by the courts ever since, holds water: The ancient Egyptians didn’t use fingerprints as identifiers, but as royal seals, no more relevant to the Jennings case than hieroglyphics.

  The practice of using fingerprints on contracts in India was not based on science, but on superstition, undertaken on a whim by a British magistrate in Jungipoor, who felt his Indian subjects would believe their personal imprint on a document was more mystically binding than a mere signature.

  As for Galton, he did invent a revolutionary system of analyzing the loops and whorls of fingerprints that is still in use today (and that bears his name), though he conceived of it not as a crime-busting tool, but as a means of classifying the genetically superior among us. Galton, in his most-remembered work, was the father of eugenics, his theory of controlled breeding and noble birth that has been used over the years to justify such human travesties as forced sterilizations, pogroms, genocide, and the Holocaust.

  And as for those thousands of cases in Britain in which fingerprints were used without error prior to 1911? The truth is, no one knew then—or now—if there were any errors or not. The police experts were simply taken at their word. No one checked, other than like-minded police examiners already told that a match had been found.

  One of the few serious courtroom challenges in the modern era to the validity of fingerprint comparisons took place in Philadelphia in 2002 under relatively new federal rules for admitting scientific testimony, the Daubert test. Daubert v. Merrell Dow Pharmaceuticals was a products liability lawsuit in which the parents of two children born with serious birth defects sued the drug maker, blaming their children’s condition on the anti-nausea drug Bendectin. In 1993, the Supreme Court helped resolve the case by crafting a new legal test to allow evidence Merrell had sought to exclude under old courtroom standards that considered only whether a scientific principle was generally accepted (a standard that favors old science over cutting edge). The Daubert test required a more detailed scientific inquiry by trial judges, including whether a forensic method can be or has been tested, whether it’s been peer reviewed, and whether error rates have been established.

  The new standard, which applies to all federal courts and has been adopted by some states, has been used to expose systemic errors in forensics—fallacies and frauds that almost always favored prosecutors over defendants. Crime lab scandals were uncovered around the country—a serologist in West Virginia who falsified hundreds of tests to help sustain convictions, a pathologist who faked autopsy results in more than twenty death-penalty cases, even a host of lab problems at the FBI. Since 1993, Daubert has been used to challenge the validity of virtually every branch of forensics, including the first re-examination of the reliability of fingerprint evidence since the Hiller murder.

  It came in a fairly run-of-the-mill cocaine-trafficking case against a Philadelphia man named Llera Plaza. US district judge Louis Pollak, considering testimony from a number of experts, including Simon Cole, ruled that while FBI fingerprint examiners could testify in the case, they were barred from claiming that any particular person absolutely matched a particular print. There would be no more claims of 100 percent accuracy. More scientific study was needed, the judge decided, before the luster could be restored to fingerprint matching.

  This was as mild a slap as the judge could have delivered. Because no error rates for fingerprint comparisons had ever been calculated—indeed, the FBI was in denial that errors ever even happened—the judge conceivably could have thrown fingerprint evidence out of court completely if Daubert standards were followed to the letter. Yet fingerprint examiners nationwide were flabbergasted by this seemingly modest and scientifically sound ruling. If sustained on appeal, it could undo hundreds, if not thousands, of criminal cases, they feared. The FBI mounted a concerted effort to overturn the decision, bringing in a legion of experts to argue that in eight decades of fingerprint matching, the FBI had never made a false match, that fingerprint errors discussed by Cole and others had been made by police agencies other than the FBI, and that Pollak’s decision would have disastrous consequences for public safety.

  In a rare move three months later, Pollak overturned himself, saying he had been wrong in his initial decision and that the FBI examiners could continue to make their usual, full-certainty fingerprint identifications in court. The Madrid bombing case showed Pollak had been right the first time—this was exactly the sort of error the FBI had previously said could not happen when pressuring Judge Pollak to overturn himself and to abandon the requirements of the Daubert test. So Daubert, intended to balance the power of old precedent with the tendency of new science to disprove old ideas, is not really doing the job. Judges are no better than juries at telling junk science from sound science, and when they are skeptical, that skepticism is usually aimed at experts for defendants more than for the prosecution. In 2017, Judge Donald Shelton of Michigan told the sixty-ninth annual conference of the American Academy of Forensic Sciences that most judges allow any and all prosecution expert testimony without question, and are reluctant to block even the most specious forensic evidence, such as bite-mark comparisons, so long as other courts are allowing it. “If it was left to judges,” Shelton said, “the earth would still be flat.”

  Now the other pattern-matching forensic disciplines, none of them on as firm a footing as fingerprints and many of them as weak as bite marks, face all the same problems and questions: Nonscientific and mythic origins. Lack of error rates. Lack of data. Vulnerability to bias. According to the data gathered by the National Registry of Exonerations, of the 2,250 men and women wrongfully convicted then exonerated of murder or other serious felonies between 1989 and 2018, 541 were convicted with flawed or misleading forensic evidence—about one out of four.

  Almost all of the fancy technology and forensic acumen that has been lionized and romanticized for years by such Hollywood concoctions as the CSI television empire has been shown to be seriously flawed. In that fantasy world, every case is solved in sixty minutes by the certainty of science. In the real world, the science of hair and fibers, bite-mark comparisons, ballistic comparisons, footprint matching, fingerprint matching, and arson investigations has been tainted by systematic error, false assumptions, and theories that turned out to have never been scientifically tested. Ironically, the newest, most powerful, and most scientifically rigorous forensic technology—DNA fingerprinting, one of the only forensic practices that actually arose from the world of science instead of police work—has unintentionally highlighted the lack of scientific rigor in other forensic disciplines.

  The dirty secret of forensic science’s flaws has left the justice system alternately in a quiet panic or in massive denial over the implications of the vanishing aura of CSI infallibility. So far, the only checks on bad science in the courtroom are individual, one-case-at-a-time legal challenges mounted by individual defendants and innocence projects. Unlike medical science, where every drug, device, procedure, and method faces years of regulation, peer review, testing, and proof of efficacy, forensic science remains an unregulated wild west. Law-enforcement agencies control it all, with no central agency or authority to separate real science from junk science, and no incentive for police or prosecutors to ban even the most outlandish forensic experts so long as they help win cases. Indeed, law-enforcement organizations, including the US Department of Justice and the National District Attorneys Association, have fought forensic reforms, just as the FBI pushed back against Judge Pollak’s ruling.

  The field of arson investigation is no exception.

  * * *

  The same year Jo Ann Parks’s trial began, a group of experienced fire and police arson investigators from around the country assembled at the Federal Law Enforcement Training Center in Glynco, Georgia, for a new kind of advanced training on determining areas of origin at fire scenes. The goal was to expose fire investigators to the challenge posed by flashover, which suddenly had become a source of controversy and legal wrangling.

  Two “burn cells” had been set up outside—stand-alone fake rooms, twelve-by-fourteen feet with a standard door used in homes across America, eight-foot ceilings, and plywood and gypsum board construction. These compartments were equipped to be set aflame, to reach 600 degrees, to achieve flashover and full-room involvement, then burn for two minutes before being extinguished. The trainees were kept away until the fire had been put out, just as they might arrive to investigate a house fire after firefighters had put out the blaze.

  The blackened burn cells looked eerily like the small, charred rooms inside the Parks house. With the smell of burnt wood still heavy in the air, the trainees were asked to walk into the burn cells, observe the damage, and determine which quadrant—which fourth of the room—contained the area of origin for the fire. That was it: not the area of origin itself, but just the quarter of the room that contained it.

  Most of the trainees felt it was child’s play.

  Later in the training, instead of empty burn cells, the test rooms were furnished as living rooms, bedrooms, and kitchens.

  In each exercise, fewer than one out of ten of these veteran investigators got the right quadrant. Consider that: Some of the nation’s leading local arson investigators, asked to identify where a fire started in an empty room, failed to do so more than nine out of ten times.

  This was not because their execution of techniques and principles was bad. They did what they’d been taught to do. It was because the techniques themselves failed most of the time when flashover occurred. Severe damage that occurred long after the fire started was time and again mistaken for the area of origin, because that’s what flashover can do.

  These results were not an anomaly. Similar exercises were conducted twice a year for the next twelve years, and the classes consistently produced the same results. According to the instructors, the odds of just guessing at random would produce better results (one out of four) than the application of traditional fire investigation methods using the area of greatest damage as the likely origin.

  That’s remarkably consistent terrible performance—not by neophytes, but by journeyman investigators who were doing real-world arson investigations as a full-time job, drawing conclusions about fires using the very same methods that failed so miserably at Glynco. Except back home and out in the field, people were being denied insurance coverage, charged with crimes, and sent to prison based on the work of these same investigators using these same techniques.

  Yet no detailed records of the failures in those burn cells were kept, and no one thought to inform the wider world of fire investigators—and convicts like Parks—that the conventional methods of fire investigation appeared to be broken when it came to flashover and rooms fully involved with flames.

  Finally, a now-retired fire investigation instructor with the Bureau of Alcohol, Tobacco, Firearms and Explosives, Steve Carman, sounded the alarm. He published an article in 2008 about the problem that flashover posed for traditional methods of fire investigation, which included the high number of errors in identifying the correct area of origin in the training center’s burn cells. He described a 2005 exercise at Glynco with the burn cells in which only three out of fifty-three trained fire investigators identified the correct area of origin. When the same group repeated the exercise, again only three got it right—a different three. That’s a success rate of only 5.7 percent in identifying the area of origin; no one nailed the actual point of origin.

  Carman said these results showed there was a desperate need for additional research and training for investigators in how to correctly analyze a post-flashover fire scene, and that failure to take flashover into account—as happened in the original Parks investigation—invited errors.

  A typically law-and-order ATF career guy, Carman had been investigating fires and training fire investigators for decades, and had been widely respected by his colleagues. Yet after he published his article about the burn-cell exercises, many of his peers treated him as a traitor for airing the dirty laundry.

  “Oh my, a lot of guys were really pissed off at me,” Carman recalls of the aftermath. “There was backlash about going public with this. But it was time for people to take their heads out of the sand and wake the heck up. We had a problem.”

  The fact that systemic problems existed with arson investigations, particularly with practices in wide use at the time of the Parks fire and for a decade or more after it, seemed obvious to Carman and others. Reforms that began with the National Fire Protection Association’s guidelines, which have been updated every few years, have been imposed on arson investigators since that time, often over the objections of many field investigators. Right around the time of the Parks fire, an entire laundry list of arson myths was exposed as junk science—myths that had been used for many years to convict people of crimes or deny them insurance payouts for lost homes and property. These included not only the glass crazing and bedspring melting that John Lentini exposed in the Oakland firestorm, but also other indicators of arson that turned out to be false: concrete spalling, pour patterns on floors, “alligatoring” of wood surfaces, the depth of charring as a reliable way to calculate how fast a fire burned. All of these indicators were used as proof that an accelerant had been used to make a fire burn faster and hotter, but it turned out that all of these “indicators of arson” could also be caused by flashover and other noncriminal causes. It has taken decades for these myths to be put aside, and the question of how to properly investigate a flashover fire remains highly controversial and a source of bitter legal battles. It was predicted that the end of discredited methods and indicators of arson and the creation of new guidelines would make it harder to determine a precise cause in many fires, leading to more findings of “undetermined” causes. FBI crime statistics show this may be happening: The number of fires found to be arson in the United States declined by 44 percent between 2001 and 2016. But while law-enforcement officials and prosecutors may be looking at new cases differently, they have opposed reopening old cases and releasing convicts sent to prison on the basis of old arson myths. Admitting systematic errors would risk exposing tens of thousands of criminal and civil arson cases to the possibility of being reopened all at once. So each case has to be fought one at a time—if it gets fought at all.

  Even so, similar concerns about flashover effects and other questionable arson investigation practices being raised in the Jo Ann Parks case have led to exonerations in other arson cases in Texas, Massachusetts, Pennsylvania, New York, Florida, and elsewhere. Cases are being reopened involving men, women, and juveniles imprisoned for years for arson and murder on scientific principles that have turned out to be unscientific. There have been at least thirty-six exonerations in felony arson cases so far, a majority of them murder cases, in which up-to-date science revealed no crime had been committed at all. There might have been a thirty-seventh, but the determination came posthumously—after Cameron Todd Willingham, in a Texas case with eerie similarities to Parks’s experience, was executed in 2004. Nine prominent fire experts later reported that the case against Willingham had no basis in science. One expert hired by the Texas Forensic Science Commission, Craig Beyler, found that the key expert witness in Willingham’s trial gave evidence that was “hardly consistent with a scientific mind-set and is more characteristic of mystics or psychics.”

  The arson prosecution of Kristine Bunch of Decatur County, Indiana, is a classic example of blunders, bias, and myth in arson investigation—a case that also has remarkable similarities to the Parks case. Bunch was convicted of killing her three-year-old son by locking him in a bedroom and setting their home on fire; she was exonerated after serving seventeen years of a sixty-year prison sentence.

  At the time of the fire in 1995, Bunch was a single twenty-one-year-old mother living with her son in a mobile home. She suffered mild injuries in the blaze, but her son perished. As in the Parks case, an allegedly halfhearted rescue attempt by a young mother was a key element of the case against Bunch.

  Three days after the fire, she was arrested. A state fire investigator had ruled out potential accidental causes, did not consider flashover in his analysis, and determined that there were two separate points of origin based on his reading of burn patterns. He used negative corpus to find that Bunch started the blaze with matches or a lighter and used charcoal lighter fluid or some other accelerant to make sure the fire spread. A laboratory analysis was said to have confirmed the presence of a petroleum-based accelerant at both the points of origin.

  The evidence at the time was deemed by the authorities to be overwhelming. Bunch, who had become pregnant before the start of her trial, was treated in court like a monster. The incensed trial judge accused Bunch of arranging the pregnancy because she thought it would boost her chances of receiving leniency. “It will not,” the judge assured her. “You will not raise that child.”

 
