More recently, researchers have begun looking at how neuroscientific evidence can bias outcomes in criminal trials. Brain scans may have a particularly strong influence on jurors. In one experiment, significantly more participants found a defendant guilty after reading that he had failed an fMRI lie-detection test than after reading that the evidence of his deceit came from polygraph or thermal-imaging tests. One of the reasons fMRI may carry such weight is that we may conflate its use in a criminal context with scanning done to identify medical conditions, like tumors or strokes. Diagnostic imaging, we assume, is diagnostic imaging.
Particularly troubling is the fact that even trained judges do not seem to be immune to the special allure of neuroscience. In one study, 181 state trial judges read about Jonathan Donahue, who, during an attempted robbery of a Burger King, ended up beating a restaurant manager repeatedly in the head with a pistol—causing permanent brain damage—because “the fat son-of-a-bitch wouldn’t stop crying.” According to testimony from a psychiatrist, Donahue met the criteria for classification as a psychopath. In addition, half of the judges were presented with testimony from “a neurobiologist and renowned expert on the causes of psychopathy” who offered results from a genetic test. The test revealed that Donahue carried a particular gene linked with antisocial behavior—in particular, with brain-development problems that result in the absence of “a normal violence-inhibition mechanism.” The other half of the judges did not receive this additional expert testimony.
You might expect that the psychopathy diagnosis would do all of the work at sentencing—it doesn’t seem as if knowing the underlying neurological cause would add much. You might also expect that the judges would view the scientific evidence as a reason to lock Donahue up for longer. Psychopaths present a serious future threat, after all.
But you’d be wrong. Bringing in expert testimony on the defendant’s antisocial condition turned out to be a double-edged sword. Judges rated the evidence of psychopathy as aggravating overall, as expected, but the additional neurobiological explanation of psychopathy resulted in significantly reduced sentences—about 7 percent shorter, on average. The neurobiologist’s testimony changed how judges thought about the defendant’s condition. With the source of Donahue’s behavior located in the brain, he suddenly seemed less in control of his actions and less blameworthy.
The question, of course, is whether the expert opinion here ought to be given such weight. Many neuroscientists are concerned, not only because applying general research on genetics and brain function to a particular individual is dubious given the state of the existing science, but also because it is unclear why explaining the particular biological mechanism of psychopathy ought to have an impact on sentencing. After all, the judges had already been provided with testimony explaining the psychology of psychopathy and how resistant it is to treatment. The key point is that the person has an impairment for which he is not responsible. It should not make any difference whether that impairment is the result of being emotionally abused as a child, possessing a certain gene, or experiencing a series of concussions.
—
During three days of pretrial testimony concerning the former Army Ranger Gary Smith’s fMRI evidence, Judge Johnson was presented with some of the research we’ve just discussed. Now he had to decide whether to allow Professor Haist’s findings to come before the jury. It was never going to be an easy decision—here was a man accused of murdering a friend and roommate; his life hung in the balance. All Smith asked was that jurors be able to consider this potentially exculpatory evidence. If you were accused of murder and you had evidence that you were innocent—even if it was based on unsettled science—shouldn’t you at least be able to show it to the people deciding your fate?
No, said Judge Johnson, the brain scans had to be kept out of the courtroom. Gary Smith was ultimately convicted of involuntary manslaughter and sentenced to twenty-eight years in prison, with the case now working its way through the appellate courts.
Judge Johnson took the path of caution. But that does not mean that all judges will resist the allure of neuroscience-based lie detection and wait for the field to develop. There has already been a major shot across the bow, half a world away. On June 12, 2008, in Mumbai, India, Judge Shalini Phansalkar-Joshi handed down the first-ever decision convicting someone of murder based in part on a brain scan.
It started as a love story. Aditi Sharma and Udit Bharati met when they were just teenagers. They dated as students at Mahantsingh Engineering College, and after getting engaged they headed off together to business school at the Indian Institute of Modern Management in Pune. But Aditi soon broke off the engagement, having fallen for a fellow MBA classmate, Pravin Khandelwal. Aditi and Pravin dropped out of school and moved to another state—disappointing, no doubt, for Aditi’s parents, but nothing out of the ordinary.
Six months later, though, Aditi was back in Pune, allegedly arranging to meet Udit at a McDonald’s. Udit would not survive to the next night.
According to prosecutors, Aditi poisoned Udit with arsenic-laced candy that she offered him as the two sat talking. The turning point in the case came when Aditi consented to being hooked up to an EEG after an initial polygraph test suggested her involvement in Udit’s death. As she sat with thirty-two electrodes attached to her head, technicians read aloud various innocuous statements, along with first-person statements about key facts in the case: “I bought arsenic.” “I met Udit at McDonald’s.”
According to the investigators administering the Brain Electrical Oscillations Signature (BEOS) test, the electrical signals coming from the surface of her scalp were damning. When she heard the details of the crime, specific regions of her brain involved in reliving past experiences became active. Aditi had not just heard about the murder; she had “experiential knowledge” of it—she was the murderer. And what is particularly astonishing is that for the investigators to reach this conclusion, Aditi didn’t have to say a word.
The BEOS results provided key evidence for Judge Phansalkar-Joshi; she devoted no fewer than nine pages of her decision to explaining and defending them. Because Aditi was later released on bail by the Bombay High Court pending her appeal (which may take years to resolve), it is easy to write this case off as an anomaly likely to be remedied by the existing judicial process. But that would be a serious mistake, because lawyers around the world are currently working to bring in similar evidence in an array of different contexts.
In the United States, courts have resisted allowing neuroscience-based lie-detection technology to come before a jury, but judges have permitted brain images to be used to challenge a witness’s testimony and to mitigate a defendant’s responsibility for a crime. Over the last decade, criminal defense attorneys have introduced neurological evidence in hundreds of cases, and the trend is increasing. So, for example, when Grady Nelson killed his wife and stabbed and raped her eleven-year-old daughter, his lawyer convinced a Miami judge to allow a neuroscientist to testify about abnormalities in Nelson’s brain. That evidence appears to have saved his life. Two jurors who came out against his execution reported that it was the neuroscience that had made them vote as they did. “It turned my decision all the way around,” one of them explained. “The technology really swayed me…. After seeing the brain scans, I was convinced this guy had some sort of brain problem.” If either of those jurors had voted differently, Nelson would have been sentenced to death.
Even when lie-detection technology is barred from the courtroom, it can still have a powerful influence on our justice system. Although the polygraph has been kept out of criminal trials for years, it has managed to play a critical part in many convictions. Polygraphs are regularly used in criminal investigations at both the federal and the local level. As we saw in Juan Rivera’s wrongful conviction case, detectives can even lie to suspects about the results in hopes that it will prompt a confession, and because the polygraph has the patina of “hard science,” it can have that effect even on those who are innocent. Kevin Fox, for instance, was told by police that he would be cleared as a suspect in the rape and murder of his young daughter, Riley, if he passed a polygraph. The test administrator, however, lied to Fox, allegedly telling him that the polygraph was absolutely reliable and admissible in Illinois state court and that it showed that he was the perpetrator. With that seemingly devastating strike against him, Fox, like Rivera, didn’t see any other way forward but to confess to a horrific crime he did not commit.
Polygraphs are a routine part of probation and parole processes, including the regulation of released sex offenders. In New Jersey, for example, almost all of the state’s 5,600 supervised sex offenders must be given at least one polygraph each year. Hooked up to the machine, they may be asked whether they’ve had any unsupervised contact with minors, abused drugs, or felt attracted to a sixteen-year-old co-worker at the fast-food place where they work. Failing the test can mean losing a job, being required to wear an electronic ankle bracelet, or ending up back in prison. In certain states, they may even be given a “sexual history disclosure examination” that covers their entire life, which could conceivably prompt additional prosecution if other crimes are revealed.
The science-fiction world in which the government tries to read your thoughts is already upon us, and it is no great leap to assume that EEGs and fMRIs will be the next generation of tools to be exploited. These technologies are starting to have an impact on our legal system, and those who stand to gain from using them are not going to wait for approval from the scientific establishment any more than they have with the polygraph.
We need to prepare judges to make the hard calls on admitting new lie-detection evidence. The existing instructions just aren’t up to the task. The main problem is not the criteria the Supreme Court has established to guide courts in deciding whether expert testimony should be admissible: Is the research falsifiable and testable? Is it peer reviewed? Has it been accepted in the scientific community? What is the likely or known error rate? These are the proper questions to ask. The problem is that most judges aren’t able to answer them. In a recent survey, only 5 percent of state trial court judges were able to explain the meaning of “falsifiability,” and an even smaller percentage understood “error rate.” Moreover, in an experiment involving Florida circuit court judges, researchers found that the judges’ decisions to admit or exclude expert testimony were not based on the quality of the science at all. With numerous legally relevant scientific breakthroughs on the horizon and research methods growing ever more sophisticated, that is reason for serious concern.
Federal and state judiciaries should commit to the rigorous training of judges in assessing expert testimony. If a forklift operator has to reach a basic level of competency with the standard equipment for his job, why shouldn’t a judge? A lack of proficiency can bring devastating consequences in both cases. And making scientific literacy mandatory doesn’t demean judges; it’s a testament to the importance of what they do. As we’ll see in the next chapter, judicial education provides other benefits as well, and there’s existing precedent. Indeed, in the last few years, leading researchers have drafted guides to help judges handle neuroscientific evidence, and a handful of seminars have been held around the country. But we need to greatly expand and bolster this work—and we might consider starting earlier, by offering more classes at law schools focused on science in the legal sphere. The first law and neuroscience coursebook was just published, and over twenty schools now have classes—like the one I teach—focused on law and the mind sciences.
Training, though, is only part of the solution. We also have to decide whether there are some kinds of expert testimony on emerging science that judges should not consider. We could very well bar mind-reading testimony from the courtroom altogether, prohibit the police from interrogating suspects using fMRI, or require a special type of warrant in order to “search” a person’s brain for memories of a crime.
For centuries, we’ve espoused a robust commitment to protecting privacy. Under traditional English law, even when authorities had a warrant, the government was not allowed to access a person’s private papers. In the United States, the Fourth and Fifth Amendments were designed to protect the privacy of all men and women, even those accused of the most heinous crimes. Yet many of us think differently today about privacy: we willingly share our inner secrets, beliefs, hopes, and emotions on social media with casual acquaintances and often with strangers. Many of my students shrug their shoulders when I point out the way corporations now collect and analyze demographic information and personal data on buying habits, Internet usage, travel history, and countless other details of our lives in order to predict our behavior. So what if a company knows that I’m gay or pregnant or considering leaving my husband before I’ve told anyone else? Revelations that the U.S. government has been widely spying on foreign heads of state and its own citizens have not seriously imperiled those in power. The truth is that many Americans may not see looking into a suspect’s or defendant’s mind without consent as a “fundamental affront to human dignity,” as the ACLU recently characterized it.
But we—all of us, not simply a few neuroethicists—need to stop and consider the matter: we owe it to future generations to make an active choice on the proper balance between privacy and security. Given the different cultural backgrounds that people bring to the table, our best bet may be to think about things from the perspective of someone who knew she would one day be accused of a crime but did not know whether she would be innocent or guilty. With that outlook, we wouldn’t go down the road to routine police questioning using brain scans until the science was very settled indeed. We might, however, be supportive of a defendant’s right to bring in even imperfect proof of truthfulness. There’s no reason that prosecutors and defendants should have to clear the same hurdles when it comes to lie-detection evidence. And in a period of less than absolute certainty, it seems fitting that mind-reading technology should serve as a shield rather than a sword.
8
UMPIRES OR ACTIVISTS?
The Judge
John Roberts was sitting on the D.C. Circuit Court of Appeals when President George W. Bush nominated him for the Supreme Court in July 2005. It had been more than a decade since the last justice joined the Court, and the stakes were further raised when Chief Justice William Rehnquist died that September. There were now two vacancies, and the president wanted Roberts to fill the preeminent position. Standing in his way were confirmation hearings before the Senate Committee on the Judiciary.
Once upon a time, for a man with Roberts’s sparkling credentials—dual Harvard degrees, years of experience as a government lawyer and in private practice, thirty-nine cases argued before the Supreme Court, and a federal appellate judgeship—the Senate would not have presented such a daunting challenge. But the landscape had changed with the failed nomination of Robert Bork in 1987.
Like Roberts, Bork had a distinguished résumé and was sitting on the D.C. Circuit when President Ronald Reagan came calling. He was buoyed by significant conservative excitement—a dream pick, to many. And despite the efforts of liberal groups to discredit him, public opinion was in his favor on the eve of the hearings. But a series of missteps, with the microphone on and cameras flashing, turned the tide. In the end, fifty-eight senators voted no, handing him the most decisive defeat in the history of Supreme Court nominations.
Roberts and his team were determined not to make similar blunders, and they did not disappoint. Where Bork had appeared humorless and arrogant—stating, at one point, that he wanted to serve on the Supreme Court because it would be “an intellectual feast”—Roberts played the part with humility and charm. Where Bork had weighed in on hot topics—criticizing the reasoning behind Roe v. Wade—Roberts avoided taking clear positions on contentious issues. Arguably the savviest move, though, came in how he framed being a judge:
“Judges are like umpires. Umpires don’t make the rules; they apply them.”
Roberts certainly wasn’t the first person to use the metaphor, but in the fall of 2005, it seemed a particularly compelling notion to many denizens of Capitol Hill and their constituents back home. Good judges call balls and strikes. They don’t pitch or bat. They put their backgrounds, experiences, and allegiances to the side and apply the clear law to the clear facts. Bad judges, by contrast, let their personal opinions about policy infect their rulings. They are unelected activists, advancing their own ends by “interpreting” things where there is no room for interpretation and by legislating from the bench.
In setting out the two implicit categories and then claiming to be one of the good, objective judges—with “no agenda” and “no platform”—Roberts allayed the fear that had doomed Bork: the fear of seating a conservative “ideologue” with a grand scheme. More important, he helped create a climate in which those who followed—Samuel Alito, Sonia Sotomayor, and Elena Kagan—were forced to acknowledge the basic truth of the umpire frame or face the possibility of rejection.
Justice Sotomayor’s path through the Senate was made much more precarious by the simple fact that she had previously voiced her belief that “personal experiences affect the facts that judges choose to see,” and that judges might be unable to be truly impartial “in all, or even in most, cases.” In Sotomayor’s estimation, the law was often ambiguous, making interpretation unavoidable: “Whether born from experience or inherent physiological or cultural differences…our gender and national origins may and will make a difference in our judging.”
To many in the Senate and in the American public, this conception was utterly unacceptable. As the Republican senator John Cornyn, of Texas, explained, Sotomayor’s professional success was not going to carry her through confirmation: “The real question is how she views her role as a judge: whether it is to advance causes or groups or whether it is to call balls and strikes.” While a few Democratic senators criticized the analogy for failing to capture the full nature of a judge’s role, Sotomayor fell into line, offering reassurance that she was not an activist and would just apply the law.