by Lee McIntyre
The scientific attitude may be thought of as a spectrum, with complete integrity at one end and fraud at the other. The criterion that delineates fraud is the intentional fabrication or falsification of data. One can fall short of complete integrity either because one has made an unintentional mistake or because one has been misleading in ways that do not rise to the level of fabrication or falsification. Into the latter class, I would put many of those “misdemeanors” against the “degrees of freedom” one has as a scientific researcher.7 The scientific attitude is a matter of degree; it is not all or nothing. Fraud may be thought of as occurring when someone violates the scientific attitude and their behavior rises to the level of fabrication or falsification. Yet a researcher can have an eroded sense of the scientific attitude and not go this far. (Cheating on the scientific attitude thus seems necessary, but not sufficient, for committing fraud.)
While it seems valuable to draw a bright line where one crosses over into fraud, this does not mean that “anything goes” short of it. Using the scientific attitude to mark off what is special about science should help us with both tasks: drawing that line and discouraging what falls short of it. In this chapter, I will argue that we may use the scientific attitude to gain a better understanding of what is so egregious about fraud and to police the line between fraud and other failures of the scientific attitude. In doing so, I hope to illuminate the many benefits that an understanding of the scientific attitude may offer for identifying and discouraging even those shoddy research practices that fall just short of fraud. As we saw in chapter 5, the scientific attitude can help us to identify and fight all manner of error. But the proper way to do this is to understand each error for what it is. Some will find anything short of complete honesty in the practice of science to be deplorable. I commend this commitment to the scientific attitude. Yet science must survive even when some of its practitioners—for whatever reason—occasionally misbehave.
Why Do People Commit Fraud?
The stereotypical picture of the scientific fraudster as someone who just makes up data is not necessarily accurate. Of course this does occur, and it is a particularly egregious form of fraud, but it is not the only or even the most common kind. Just as guilty are those who think that they already know the answer to some empirical question, and can’t be bothered to take the time—due to various pressures—to get the data right.
In his excellent book On Fact and Fraud, David Goodstein provides a bracing analysis of numerous examples of scientific fraud, written by someone who for years has been charged with investigating it.8 After making the customary claim that science is self-correcting, and that the process of science will eventually detect the insertion of any falsehood (no matter whether it was intentional or unintentional),9 Goodstein goes on to make an enormously provocative claim.10 He says that in his experience most of those who have committed fraud are not those who are deliberately trying to insert a falsehood into the corpus of science, but rather those who have decided to “help things along” by taking a shortcut to some truth that they “knew” would be vindicated.11 This assessment should at least give us pause to reconsider the stereotypical view of scientific fraud.12 Although there are surely examples of fraud that have been committed by those who deliberately insert falsehoods into the corpus of science, what should we say about the “helpers”? Perhaps here the analogy with the liar (who intentionally puts forth a falsehood) is less apt than that of the impatient egoist, who has the hubris to short-circuit the process that everyone else has to follow. Yet, seen in this light, scientific fraud is a matter not merely of bad motives, but of having the arrogance to think that one deserves to take a shortcut in how science is done.
It is notable that concern with hubris in the search for knowledge predates science. In his dialogues, Plato makes the case (through Socrates) that false belief is a greater threat to the search for truth than mere error.13 Time and again, Socrates exposes the ignorance of someone like Meno or Euthyphro who thought they knew something, only to find out quite literally that they didn’t know what they were talking about. Why is this important? Not because Socrates feels that he himself has all the answers; Socrates customarily professes ignorance. Instead, the lesson seems to be that error is easier to recover from than false belief. If we make an honest mistake, we can be corrected by others. If we accept that we are ignorant, perhaps we will go on learning. But when we think that we already know the truth (which is a mindset that may tempt us to cut corners in our empirical work) we may miss the truth. Although the scientific attitude remains a powerful weapon, hubris is an enemy that should not be underestimated. Deep humility and respect for one’s own ignorance are at the heart of the scientific attitude. When we violate this, we may already be on the road to fraud.14
If some, at least, commit fraud with the conviction that they are merely hurrying things along the road to truth, is their attitude vindicated? No. Just as we would not vindicate the vigilante who killed in the name of justice, the “facilitator of truth” who takes shortcuts is guilty not just of bad actions but of bad intent. Even with so-called well-intentioned fraud, the deceit was still intentional. One is being dishonest not merely in one’s actions but in one’s mind. Fraud is the intentional fabrication or falsification of evidence, in order to convince someone else to believe what we want them to believe. But without the most rigorous methods of gathering this evidence, there is no warrant. Merely to be right, without justification, is not knowledge. As Socrates puts it in Meno, “right opinion is a different thing than knowledge.”15 Knowledge is justified true belief. Thus fraud short-circuits the process by which scientists formulate their beliefs, even if one guesses right. Whatever the motive, one who commits fraud is doing it with full knowledge that this is not the way that science is supposed to be done. Whether one thought that one was “inserting a falsehood” or “helping truth along” does not matter. The hubris of “thinking that you are right” is enough to falsify not just the result but the process. And in a process as filled with serendipity and surprise as science, the danger of false belief is all around us.
The Thin Crimson Line
One problem with judging fraud is the use of euphemistic language in discussing it. Understanding the stakes for a researcher’s academic career, universities are sometimes reluctant to use the words “fraud” or “plagiarism” even in cases that are quite clear-cut.16 If someone is found guilty of fraud (or sometimes even suspected of it), they are all but “excommunicated” from the community of scientists. Their reputation is dishonored. Everything they have ever done—whether it was fraudulent or not—will be questioned. Their colleagues and coauthors will shun them. Sometimes, if federal money is mismanaged or they live in a country with strict laws, they may even go to jail.17 Yet the professional judgment of one’s peers is often worse (or at least more certain) than any criminal punishment. Once the taint of fraud is in the air, it is very hard to overcome.18 It is customary for someone who has been found guilty of fraud simply to leave the profession.
One recent example of equivocating in the face of fraud is the case of Marc Hauser, former Professor of Psychology at Harvard University, who was investigated both by Harvard and by the Office of Research Integrity (ORI) at the National Institutes of Health. The results of Harvard’s own internal investigation were never made public. But in the federal finding that came out some time later, the ORI found that half of the data in one of Hauser’s graphs was fabricated. In another paper he “falsified the coding” of some data. In another he “falsely described the methodology used to code the results for experiments.” And the list goes on. If this isn’t fraud, what is? Yet the university allowed Hauser to say—before the federal findings came out—that his mistakes were the result of a “heavy workload” and that he was nonetheless willing to accept responsibility “whether or not I was directly involved.” At first Hauser merely took a leave of absence, but after his faculty colleagues voted to bar him from teaching, he quietly resigned. Hauser later worked at a program for at-risk youth.19
Although many may be tempted to use the term “research misconduct” as a catch-all phrase that includes fraud (or is a euphemism for it), this blurs the line between intentional and unintentional deception. Does research misconduct also include sloppy or careless research practices? Are data fabrication and falsification in the same boat as improper data storage? The problem is a real one. A university trying to write a policy on fraud might word it quite differently than a policy on scientific misconduct. As Goodstein demonstrates in his book, the latter can tempt us to include language about nonstandard research practices as something we may want to discourage and even punish, but that does not rise to the level of fraud. Goodstein writes, “There are many practices that are not commonly accepted within the scientific community, but don’t, or shouldn’t, amount to scientific fraud.”20 What difference does this make? Some might argue that it doesn’t matter at all: that even bad research practices like “poor data storage or retention,” “failure to report discrepant data,” or “overinterpretation of data” represent a failure of the scientific attitude. As previously noted, the scientific attitude isn’t all or nothing. Isn’t engaging in “deviation from accepted practices”21 also to be discouraged? Maybe so, but I would argue that there is a high cost for not differentiating this from fraud.
Without a sharp line, it may sometimes be difficult even for researchers themselves to tell when they are on the verge of fraud. Consider again the example of cold fusion. Was this deception or self-deception—and can these be cleanly separated?22 In his book Voodoo Science, Robert Park argues that self-delusion evolves imperceptibly into fraud.23 Most would disagree, because fraud is intentional. As Goodstein remarks, self-delusion and other examples of human foibles should not be thought of as fraud.
Mistaken interpretations of how nature behaves do not and never will constitute scientific misconduct. They certainly tell us something about the ways in which scientists may fall victim to self-delusion, misperceptions, unrealistic expectations, and flawed experimentation, to name but a few shortcomings. But these are examples of all-too-human foibles, not instances of fraud.24
Perhaps we need another category. Goodstein argues that even though the cold fusion case was not fraud it comes close to what Irving Langmuir calls “pathological science,” which is when “the person always thinks he or she is doing the right thing, but is led into folly by self-delusion.”25 So perhaps Park and Goodstein are both right: even if self-delusion is not fraud, it may be a step on the road that leads there. I think we need to take seriously the idea that what starts as self-delusion might later (like hubris) lead us into fraud. The question here is whether tolerating or indulging in self-delusion for long enough erodes our attitude toward what good science is supposed to look like.26
Moreover, even if we are solely concerned (as we are now) with intentional deception, it might be a good idea to examine any path that may lead there. It is important to recognize that self-delusion, cognitive bias, sloppy research practices, and pathological science are all dangerous—even if we do not think that they constitute fraud—precisely because if left unchecked they might erode respect for the scientific attitude, which can lead to fraud. But this does not mean that actual fraud should not be held distinct. Neither should there be any excuse for lack of clarity in university policies over what actually is fraud, versus what practices we merely wish to discourage. We are right to want to encourage researchers to have impeccable attitudes about learning from evidence, even if we must also draw a line between those who are engaging in questionable or nonstandard research practices and those who have committed fraud.
Any lack of clarity—or lack of commitment actually to use the word “fraud” in cases that are unequivocal—can be a problem, for it allows those who have committed fraud to hide behind unspecified admissions of wrongdoing, copping to mistakes without truly accepting responsibility for them. This does a disservice not only to the majority of honest scientists, but also to those who have not (quite) committed fraud, for it makes the community of scientists even more suspicious of someone who has made only a mistake (e.g., faulty data storage) and has not committed full-blown fraud.27 If fraud is defined merely as one type of scientific misconduct, or the latter phrase is used as a euphemism for the former, whom does this serve?
If the scientific attitude is our criterion, when we find fraud we should name and expose it. This will act as a deterrent to others and a signal of integrity for science as a whole.
We must be vigilant to find and expose such wrongdoers, careful at the same time not to spread the blame beyond where it belongs and unintentionally stifle the freedom to question and explore that has always characterized scientific progress.28
When an allegation of fraud is made public, the punishment from one’s community can (and should) be swift and sure. But first the fraud must not be covered up. We can acknowledge the pressures to cover it up, but giving in to them can tarnish the reputation of science. For when blame is not cast precisely where it should be—and some suspect that fraud is too often excused or covered up—the unintended consequence can be that an injustice is done to those who are merely accused of it. When fraud is selectively punished, those who are only accused may be presumed guilty. We see evidence of this in the previously mentioned scandals over reproducibility and article retraction. Scientific errors sometimes happen. Some studies are irreproducible and/or are retracted for reasons that have nothing whatsoever to do with fraud. Yet if there is no sharp line for what constitutes fraud—and we retreat into the weasel words “research misconduct”—it is far too easy to say “a pox on all your houses,” look only at external events (like article retraction), and assume that these are a proxy for bad intent. The sum result is that some fraudsters are allowed to get away with it, while some who have not committed fraud are falsely accused. None of this is good for science.
When left to scientists rather than administrators, there is usually no equivocating about naming and punishing actual instances of fraud. Indeed, I see it as one of the virtues of using the scientific attitude to distinguish science from nonscience that it explains why scientists are so hard on fraud. If we talked more about the scientific attitude as an essential feature of science, this might allow scientists more easily to police the line between good and bad science.29 Some may wonder why this would be. After all, if the process of group scrutiny of individual ideas in science is so good, it will catch all types of errors, whether they were committed intentionally or not. But this misses the point, which is that science is precisely the kind of enterprise where we must count on most policing to be self-policing. If science were a dishonest enterprise where everyone cheated—and it was the job of peer reviewers to catch them—science would break down. Thus fraud is and should be recognized as different in kind from other scientific mistakes, for it represents a breach of faith in the values that bind scientists together.
The Vaccine–Autism Debacle
We are now in a position to consider the impact that scientific fraud can have not just on scientists but on the entire community of people who rely on science to make decisions about their daily lives. In 1998, Dr. Andrew Wakefield published a paper with twelve coauthors in the prestigious British medical journal Lancet, which claimed to have found a link between the classic MMR triple vaccine and the onset of autism. If true, this would have been an enormous breakthrough in autism research. Both the public and the press demanded more information, so along with a few of his coauthors Wakefield held a press conference. Already, questions were being raised about the validity of the research. As it turned out, the paper was based on an extremely small sample of only twelve children. There were, moreover, no controls; all of the children in the study had been vaccinated and had autism. While this may sound to the layperson like good evidence of a causal link, to someone with training in statistics, questions will naturally arise. For one, how did the patients come to the study? This is important: far from being a randomized double-blind clinical study (where researchers randomly test their hypothesis on only half of a sample population, with neither the subject nor the researcher knowing who is in which half), or even a “case control study” (where investigators examine a group that has been naturally exposed to the phenomenon in question),30 Wakefield’s paper was a simple “case series” study, which is perhaps the equivalent of finding out by accident that several people have the same birthday, then mining them for further correlations. Obviously, with the latter, there can be a danger of selection bias. Finally, a good deal of the study’s evidence for a correlation between vaccines and autism was based on a short timeline between vaccination and onset of symptoms, yet this was measured through parental recollection.
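The birthday analogy can be made concrete. Here is a minimal illustrative sketch (mine, not from Wakefield’s study or the original text), assuming 365 equally likely birthdays, of how readily such coincidental clusters arise in small groups:

```python
# Illustrative sketch: the classic "birthday problem." Computes the chance
# that at least two of n people share a birthday, assuming 365 equally
# likely birthdays and ignoring leap years.

def prob_shared_birthday(n: int) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        # Each successive person must avoid all birthdays seen so far.
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

if __name__ == "__main__":
    for n in (12, 23, 50):
        print(f"{n:>2} people: {prob_shared_birthday(n):6.1%} chance of a shared birthday")
    # 12 people: ~16.7%; 23 people: ~50.7%; 50 people: ~97.0%
```

The same logic applies to any small group assembled around a shared feature: without a comparison group, a cluster of coincidences is weak evidence of a common cause.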
Any one of these things would be enough to raise suspicions in the minds of other researchers, and they did. For the next several years, medical researchers from all over the world performed multiple studies to see if they could replicate Wakefield’s proposed link between vaccines and autism. A good deal of speculation focused on the question of whether thimerosal, a mercury-based preservative then used in some childhood vaccines, might have caused mercury poisoning. In the meantime, just to be safe, several countries stopped using thimerosal while research was underway. But, in the end, none of the studies found any link.
Epidemiologists in Finland pored over the medical records of more than two million children … finding no evidence that the [MMR] vaccine caused autism. In addition, several countries removed thimerosal from vaccines before the United States. Studies in virtually all of them—Denmark, Canada, Sweden, and the United Kingdom—found that the number of children diagnosed with autism continued to rise throughout the 1990s, after thimerosal had been removed. All told, ten separate studies failed to find a link between MMR and autism; six other groups failed to find a link between thimerosal and autism.31