But another part of her was daunted and disappointed. She had wondered why her opponent at the district attorney’s office had suddenly gotten more businesslike and less amenable to even discussing a possible settlement. This report had to be the reason. This was Nordskog’s way of telling the prosecutor that the Parks conviction was righteous, and that the DA should do everything to keep the perpetrator of this evil plot in prison.
Cohen walked the report over to her boss, telling Justin Brooks there would be no settlement. They laughed a little at the accusation of being in the “exoneration business,” knowing that almost no one involved as experts in these cases made any money at all, because the innocence project had next to nothing to pay them. But it was a halfhearted laugh. The most prominent expert witness Cohen wanted to testify, John Lentini, was targeted for particular criticism in the report as a flawed and outrageous character who “has a strong personal bias against the very job and title that the [arson] investigators hold,” and who works as an “exoneration advocate . . . profiting quite well from his work in this sector.” Lentini, worried that such attacks would become a distraction, told Cohen he’d prefer not to take the stand in the Parks case and suggested she call instead on one of his co-authors on the Parks report.
Cohen and Brooks are used to skepticism, hostility, and pushback from law enforcement when the attorneys claim there were flaws in a particular investigation. It becomes even more strident when they point to systemic flaws, such as Cohen’s claims about problems with the underlying forensic science in the Parks case. Making Parks a sort of “patient zero” for new lines of attack against cognitive bias and negative corpus was always likely to generate vigorous opposition. The project’s young attorneys, Cohen especially, fume about this, but Brooks is unexpectedly sympathetic.
“Imagine you’re a prosecutor,” he said. “You spent months, maybe years, trying to put a person behind bars. And you are fundamentally a good person. . . . Most DAs are good people, trying to do their jobs every day. They view themselves as protecting the public, and they do that. Then think about the psychology of having us waltz in and say, ‘Hey, you were wrong.’ What’s it like to be told you put someone away for ten or twenty or thirty years, and you were just doing your job, just putting the evidence out there and getting a conviction, acting in good faith? But it just so happens the evidence was wrong. How do you persuade yourself to accept that? Some do. Some can’t.”
Ron Ablott unintentionally described the nature of this dilemma, and the barrier between law enforcement and the innocence project, in just a few poignant words. Asked while testifying against Jo Ann Parks if he could be wrong in his evaluation of the fire and its cause, he said no.
“If I am wrong,” he said, “then everything I have ever been taught and those people that have been taught before me would all be wrong.”
And that is, in effect, what the California Innocence Project attorneys were claiming in the Parks case: the assumptions fire investigators made then, and in some cases continue to make today, have been wrong. The new report from Nordskog doubling down on the original findings in the case, and upping the ante with claims of serial fire setting, made it clear that the prosecution was not going to take that lying down.
There would be no easy settlement in the Jo Ann Parks case, Cohen knew. They’d have to go to court. They’d have to go to war.
PART THREE
FIRE ON TRIAL
14
Sherlock Was Wrong
Clarence Hiller’s wife awoke with a start and shook him from sleep, sensing something amiss before she realized what it was: The room was too dark.
The hallway light outside their daughter’s room, the one they always kept burning through the night, had gone out. And so the dutiful husband and father of four rose from bed to have a look, to make sure all was well, and to begin the last thirty seconds of his life.
Near the top of the stairs Hiller met an unexpected obstacle: the shadowy form of a man. The intruder lunged and the two men, shoving and flailing, fell thudding down the staircase. At the landing, the stranger wrestled a pistol free from his waistband and fired two quick shots. Clarence Hiller, a man who simply wanted to put on the night-light so his daughter would feel safe in the dark, sighed once and died in his nightclothes, his wife’s shrieks still in the air. The burglar-turned-murderer pounded out the front door and onto the empty streets of predawn Chicago.
The police soon arrived and pieced together what had happened. The killer had climbed through the Hillers’ kitchen window, entering quietly but leaving behind a key bit of evidence: On a freshly painted railing beneath the window, investigators found the vivid imprints of four fingers of someone’s left hand embedded in the still-soft paint.
At 2:38 A.M., a police patrol spotted Thomas Jennings, a paroled burglar, limping in the darkness not far from the Hiller home. They frisked him and found a revolver in his pocket. None of the surviving Hillers got a good look at his face, but it made little difference. Four police experts agreed that, without doubt, Jennings’s left hand had grabbed that wooden rail—a clean, simple, straightforward resolution, case closed.
So when Simon Cole, a social sciences professor at the University of California, Irvine, began his lonely campaign almost two decades ago to prove that the science of fingerprint analysis is missing a crucial element—namely, the science part—he wasn’t merely going up against the police, the FBI, decades of television cop show heroes, and pretty much every black-robed judge in the land. Cole was taking on Clarence Hiller’s ghost.
For Hiller has both empowered and haunted the justice system for more than a century, his murder by Thomas Jennings in 1910 a seminal moment in the history of law enforcement. Jennings became the first person in America to be arrested, convicted, sentenced to death, and hanged through the triumph of dactylography—also known as the analysis of fingerprints. Hiller’s murder ushered in a new era of modern law enforcement, making possible the millions of fingerprint comparisons and cases that followed, from the Lindbergh kidnapping to the Night Stalker serial killings to the prosecution of the Al Qaeda operative who planned the millennium bombing of LAX.
But according to Cole—and a growing number of scientists, scholars, and legal experts horrified at high-profile fingerprint blunders—the courts have gotten it wrong for the past century. Since Clarence Hiller’s murder, the legal system has treated fingerprint comparisons as not simply invaluable, which they unquestionably are, but as essentially infallible—when they are anything but. Just ask Lana Canen of Elkhart, Indiana, convicted of murdering her neighbor in 2002 based on a botched fingerprint comparison. Or Stephan Cowans of Roxbury, Massachusetts, who spent six and a half years in prison after the gunman’s fingerprints in the 1997 shooting of a police officer were falsely identified as his. And then there’s Brandon Mayfield, the Oregon lawyer who was arrested when FBI experts mistakenly linked his fingerprint to the 2004 Madrid terrorist train bombing—notwithstanding the fact that Mayfield had never been to Madrid in his life, and the fingerprints were subsequently found by Spanish experts to match a known foreign terrorist.
Had the bombing occurred in the United States, the error almost certainly would not have been detected. Mayfield would still be in prison because, as everyone seemed to believe up until then, FBI fingerprint examiners were infallible. That was literally both the bureau’s official position and the state of the law.
Cole, however, had been warning the FBI and other law-enforcement agencies for years that this would happen—that, in fact, it had been happening in lower-profile cases around the country since the day Jennings was hanged. The ability to match a known fingerprint to a partial, smudged, or faint latent lifted from a crime scene is an imperfect art at best, Cole says. It can only be deemed scientific and reliable after we know the error rate and also take steps to protect examiners from the sort of cognitive bias that infected the Madrid bombing fiasco, where the examiners knew supposedly incriminating details about Mayfield in advance, or knew that a match had been declared by other examiners already. Fingerprint examiners, even top ones, can err when the pressure is great and the partial fingerprints are spotty. Yet their findings have been viewed as so persuasive and so overwhelming that even other fingerprint examiners are psychologically affected and tend to verify a declared match when they know another examiner they trust has found one. This is a classic example of expectation bias.
Although it’s done in courtrooms on a daily basis, there is no scientific basis for saying any one fingerprint match is 100 percent certain. Not even DNA matching makes such a radical claim, instead expressing a match in terms of probabilities—one in a hundred, one in a million, one in a billion. Those calculations are based on extensive genetic research on human populations and the frequency of certain genetic markers. DNA matching, then, is true science. Fingerprint matching, not so much.
Long written off as an outlier for challenging the conventional wisdom about fingerprints, Cole has gained respect over the years and his arguments have gained traction, particularly in the wake of the FBI’s embarrassment over the Madrid case. The errors were all the more humiliating because the FBI had just headed off several legal challenges to fingerprint evidence by convincing federal judges that its procedures had never yielded such a mistake, and that Cole’s concerns, therefore, were of little consequence.
“There was a fallacy at work: the belief that, because all fingerprints are unique, therefore fingerprint evidence is inherently reliable,” Cole has said. “It makes sense at first blush, but think about it: No two faces are alike, yet eyewitness identification is difficult and problem plagued. . . . The real question is not whether all fingerprints are different, but how accurate are fingerprint examiners at matching the small, fragmentary prints you find at crime scenes.”
Fingerprint examiners long countered that argument by saying there’s more than a century of “empirical research” to back up fingerprint accuracy. But the “empirical research” they are referring to is nothing more than the day-to-day use of fingerprint evidence by police agencies and its longstanding acceptance by the courts. That makes Cole’s point: Science is designed to uncover errors and find the truth; the courts are designed to follow precedent, which means judges will admit evidence not because it’s scientifically sound, but because some other court admitted the same sort of evidence in the past.
If the Madrid case raised questions about the dependability of the most ubiquitous and seemingly certain forensic science of fingerprinting, the release of a forensic science critique by the National Academy of Sciences in 2009, followed by an even more scathing federal report in 2016, created a sensation. Other forensic disciplines turned out to be in far worse shape than fingerprinting, those reports found. In the 2016 report, the President’s Council of Advisors on Science and Technology also looked at the “science” of bite-mark analysis, hair and fiber comparisons, shoe-print matching, and a host of other commonly used methods of tying criminals to crimes and found serious problems with all of them. Suddenly Cole looked less like a gadfly and more like a prophet. He is now a respected voice for forensic reforms and head of the National Registry of Exonerations, the nation’s leading clearinghouse for statistics and research on wrongful convictions, based at his Irvine campus.
As Cole had long argued, the presidential commission found there was no science in many of those “pattern matching” forensic practices—no analysis of how often the “experts” were mistaken, no guidelines for how to consistently make a match versus an exclusion, no data to show that, for example, bite marks left in a victim’s skin, or tire tread marks at a murder scene, or shirt fibers caught in a gun’s trigger guard, can validly be matched to a suspect in a crime. Without such foundational research, the subjective human process of matching fingerprints or bites or bullet markings or burn patterns can never be truly scientific, and the absence of scientific backing invites biased comparisons and junk science into the courtroom. Only DNA comparisons were deemed by the commission to be on sound scientific footing.
There were no significant studies of errors in fingerprint matching before 2011, and the efforts to make fingerprinting fully reliable and scientific have a long way to go. According to the 2016 report of the presidential commission, an FBI study found that false fingerprint matches could occur in one out of every 306 cases; another study found a stunning error rate as great as one out of eighteen cases.
The reason for the lack of scientific rigor is simple: Most forensic methods did not arise from scientific research but from a prosecutor’s or policeman’s need. Sir Arthur Ignatius Conan Doyle’s fictional consulting detective, Sherlock Holmes, popularized the idea of using scientific techniques to solve crimes, including fingerprints, and inspired some real police investigators to try to do the same.
Or consider the history of bite-mark evidence, first used rather inauspiciously in the United States during the Salem witch trials. In the modern era, in 1974 in California, a detective noticed a homicide victim had a bite mark on his nose. The detective had a bright idea: bring in a dentist to see if the bite could be matched to a suspect. He found three dentists willing to compare a suspect’s teeth to the marks on the victim’s nose, and Walter Edgar Marx was convicted of manslaughter. When Marx appealed, the California appellate courts held that bite-mark evidence was admissible despite what everyone, including the judges hearing the case, agreed was a complete lack of scientific backing for the technique. There was no controlled research, no real data, no testing, no validation, no blind testing for errors, no proof that human skin can accurately record a bite mark that can be matched to anyone. All the case had was a few dentists saying, yep, those teeth made that bite mark—a claim uttered with confidence and certainty, without ambiguity.
And yet the wildly illogical judicial opinion in that case opened the floodgates for bite-mark prosecutions nationwide, with subsequent courts simply following California’s precedent and eventually asserting that the original case had established the validity of bite-mark “science,” though it established nothing of the kind. Such is the mesmerizing power of precedent, which is basically the court system’s way of saying, if we did it before, it must be correct, and history is all the proof we need. In this, the law is the opposite of science, which welcomes and encourages proof that history is wrong, and so we no longer envision a flat earth orbited by the sun. Meanwhile, the Marx case not only spawned a new and dubious tool for imprisoning people but also created a lucrative cottage industry for dentists who claimed a new specialty: forensic dentistry, which figured prominently in, among other big cases, the prosecution of serial killer Ted Bundy.
The junk science pedigree of bite-mark evidence led the California Innocence Project to take up the case of Bill Richards, who spent twenty-two years in prison for the murder of his wife at their Mojave Desert home. A prominent bite-mark expert matched a mark on his wife’s hand to Richards’s crooked teeth. The expert, Dr. Norman “Skip” Sperber, claimed no more than two out of a hundred people would have a bite anything like Richards’s. The testimony was enough to convict him.
The innocence project attorneys requested DNA testing of samples from the murder weapon; the DNA belonged to neither Richards nor his wife. Then Sperber recanted his testimony and admitted there was no scientific basis for his findings. There was literally no case left after that. But the San Bernardino County District Attorney fought and appealed for the next eight years to keep Richards behind bars anyway, arguing—successfully—that the courts should ignore claims that the bite-mark evidence was false because it had been an opinion, not direct testimony, and an opinion technically cannot be false. Therefore Richards’s trial technically had been “fair” under California law. The California Supreme Court went along with this reasoning—which meant there was no legal recourse for Richards, even though he had been proved innocent. It was so outrageous that Alex Simpson, associate director of the California Innocence Project, was able to lobby the state legislature successfully for a new law that allows appeals based on false scientific expert testimony. Without that change, Richards would be in prison to this day. He finally walked out of prison in 2016.
Without that change in the law, Jo Ann Parks would have no case, either.
Meanwhile, bite-mark comparisons are now widely regarded by the scientific community as junk science at its worst. The presidential commission report opined that it would probably never become a legitimate science. And yet courts still admit bite-mark evidence around the country on a regular basis—because there is legal precedent to do so.
* * *
The history of fingerprints is just as science-free as that of bite marks. A prosaic presence in every B-movie murder plot and rerun of Law & Order, fingerprinting is so ubiquitous today that it is virtually synonymous with the concept of identity itself. Fingerprinting certainly represented a revolution when the handcuffs snapped onto Tom Jennings’s wrists after the 1910 murder of Clarence Hiller. The discovery that fingerprints could be read in a way that rendered people as unique as snowflakes did as much for crime detection as penicillin would do three decades later for health care.
Perhaps it’s not surprising, then, that the scientific reasoning, factual findings, even the wording of the venerable landmark legal opinion affirming the reliability of fingerprint evidence in the murder case against Jennings have been quoted, paraphrased, and depended upon ever since. The court gave an epic review of fingerprint history: how the ancient Egyptians used the pharaoh’s thumbprint as an official identifier, how contract signers and courts in India for decades had relied upon fingerprints as binding identifications, how the system of fingerprint analysis invented by Sir Francis Galton, cousin to Charles Darwin, had a firm scientific basis and, by 1911, had been used by the British police in “thousands of cases without error.”