Why Trust Science?

by Naomi Oreskes


  Moreover, it is not clear what the current increase in the retraction rate means, because the notion and practice of retraction are relatively recent in the history of science. Historians have yet to study this matter closely, but it seems that the word “retraction” was until recently mostly used in the context of journalism.34 According to Steen et al. (2013), the earliest retraction of a paper indexed in PubMed—the largest index of biomedical publications—was the 1977 retraction of a paper published in 1973.35 To a historian, this relatively recent date is not surprising, insofar as faulty claims in science have traditionally been corrected by subsequent articles or simply ignored. Today we have claims of a retraction crisis, promulgated by websites such as RetractionWatch.com, complete with social media outreach (@retractionwatch) and a Facebook page: https://www.facebook.com/retractionwatch/.36

  RetractionWatch.com was founded in 2010, which suggests either that retractions have only of late become a problem or that they have only of late come to public attention. Here I am speculating, but I venture that the concept of retraction has gained traction in recent years because of heightened public scrutiny of science, which in turn has created conditions in which previously accepted practices of allowing erroneous claims to wither away are no longer considered adequate. If retractions were rare in the past but are common now, this may mean that science is more plagued by fraud or error than it once was. But it may simply signify that more people are watching, and mistakes that might in the past have been accepted as an unproblematic element of the progress of science are now being recast as unacceptable. In other words, for better or worse, it appears that we have changed our concept of what constitutes a problem in science.

  The fact that most of Professor Krosnick’s examples come from psychology and biomedicine is consistent with this interpretation. These are fields that generate a great deal of popular interest, and in which scientific results can have large social and commercial consequences. It is surely not a coincidence that the two studies he cites that found low rates of replication in biomedicine were undertaken by companies—Amgen and Bayer—with substantial financial stakes in scientific research outcomes. The competitive pressure of these high-stakes fields may indeed lead scientists to rush to publish work that turns out to be flawed. These fields are also heavily covered by the mass media, which often run articles on single studies that may not be upheld by further work, perhaps leaving a biased impression of the overall state of science.

  I can think of no example of a prominent retraction in geomorphology or paleontology.37 There is, however, a recent highly publicized case in hydrology that merits consideration. A study published in a leading peer-reviewed journal found no effects on groundwater from hydraulic fracturing operations for gas production. The result garnered media attention because it seemed to undercut a major source of public concern about, and opposition to, fracking. However, a conflict of interest was later revealed: a gas company had partially funded the study, supplied the samples, and been involved in the study design, and one of the authors had worked for the company. The authors had not disclosed these potentially biasing factors.38 The journal undertook a review of the situation and invited me to write a paper on the necessity of financial disclosure (which I did).39 Meanwhile, other researchers came to contrasting conclusions about the relation between proximity to gas wells and groundwater contamination.40 We do not know which side in this debate is correct scientifically, but we do know that one side had a conflict of interest that could have affected its results.41

  What do we conclude from all this? One obvious conclusion is that peer review is a highly imperfect process: bad and biased papers do get published. In the domain of endocrine-disrupting chemicals, it has been shown that some published papers use strains of mice that are known to be insensitive to the effects under investigation.42 Why would someone do that? It could be accidental—perhaps the researchers were unaware that these strains were insensitive—or it could be deliberate. It could also be that, knowing their funders’ desires, the researchers were subconsciously biased. Scientific papers are complex, and if the methods being used appear to be standard, a reviewer might not examine them in detail. However, if reviewers are aware that a study’s funders have a vested interest in a particular outcome, they may pay just a bit more attention.

  We should also acknowledge that sometimes papers are wrongly retracted because of social or political pressure.43 These cases may be rare or they may not be; the available evidence makes it difficult to judge. And this leads to another question: Is this a global problem or not? The papers Krosnick cites were all published in English-language journals, which suggests that the lion’s share were produced by English-speaking researchers or institutions. In the United Kingdom, recent changes in the evaluation and funding of research universities have greatly increased pressures on scientists to increase their rate of publication. In the United States, funding rates have gone down dramatically compared to the 1960s, increasing the competitive pressure on researchers to produce results in a timely fashion in order to compete for the next round of funding. These factors contribute to pressure to produce—and not to take too much time checking results. One empirical test we might undertake would be to look at the countries of origin of the researchers whose papers are being retracted.

  There may well be general problems in contemporary science born of the competitive pressure to publish quickly and move on to the next fundable project, but Krosnick has not made the case. Despite his suggestion that problems are rampant throughout science, most of the evidence he offers involves a few domains and is drawn from English-language journals. This does not prove that all is well elsewhere, but Krosnick’s argument slips from domains where problems are evident to domains where they are not.

  This lack of clear and quantitative evidence permits him to make what I consider the least supported of his comments, that “We don’t need to know whether a project’s funding comes from ExxonMobil or the National Science Foundation,” and that “the source of funding … [is] among the least of our problems. A huge amount of research has been funded by federal agencies and private foundations that have no real agendas other than supporting scientists’ making discoveries as quickly as possible.” Here Krosnick makes both a logical and an empirical error. Logically, he succumbs to the fallacy of the excluded middle: even if it were demonstrated that the replication problem was pervasive, that would not exclude the possibility of other serious problems in research. Empirically, we have strong evidence of adverse effects when research is funded by self-interested parties.

  It has been established that the tobacco industry long funded scientific research with the explicit goals of confusing the public, escaping legal liability by delaying epistemic closure, blocking public policy aimed at curtailing smoking, and, above all, maintaining corporate profitability by keeping smokers smoking.44 By the judgment of nearly all scholars who have studied the matter, the industry succeeded. The link between tobacco and cancer was demonstrated by the 1950s, but smoking rates in the United States began to decline dramatically only in the 1970s, when the tobacco strategy began to be exposed and thereby to become less efficacious.45 While it is impossible to prove a counterfactual, the available evidence strongly suggests that if the tobacco industry had not interfered with scientific research and communication, more people would have quit smoking sooner and lives would have been saved.

  The tobacco story is egregious, but not unique. Scholars have demonstrated the effects of motivated industry funding in the realms of pesticides and other synthetic chemicals, genetically modified crops, lead paint, and pharmaceuticals.46 Recently, some have noted that a disproportionate amount of environmental research is now funded by the fossil fuel industry.47 While the effects of the latter are not yet entirely evident, it seems reasonable to suppose that at minimum this is influencing the focus of research projects (for example, emphasizing carbon sequestration as an answer to climate change rather than energy efficiency), and could be biasing the interpretation of scientific results.48

  There is an additional problem that merits attention, one that increasingly makes it difficult for observers—or even scientists themselves—to differentiate legitimate from facsimile science. (By this term I mean materials that carry the accoutrements of science—including in some cases peer review—but fail to adhere to accepted scientific standards such as methodological naturalism, complete and open reporting of data, and the willingness to revise assumptions in the light of data.)49 This is the problem of for-profit and predatory conferences and journals.

  In recent years, various forms of sham science have proliferated. Some of them appear to be motivated purely by profit, charging substantial fees to attend their conferences or publish in their journals, fees that many scientists pay out of their research funds. Last year, one facsimile science institution run by a Turkish family was estimated to have earned over $4 million in revenue through conferences and journals.50 Others may have disinformation as their intent, as they provide outlets for the tobacco, pharmaceutical, and other regulated industries to make poorly supported and false claims, and then insist that those claims are supported by “peer-reviewed science.”51

  A 2018 article, “Inside the Fake Science Factory,” discusses the findings of a team of researchers who analyzed over 175,000 articles published in predatory journals and found extensive evidence of published studies and conferences funded by major corporations, including the tobacco company Philip Morris, which has been found responsible in US courts for fraud based in part on its use of sham science to promote and defend its products.52 Other participating companies, according to the report, included the pharmaceutical company AstraZeneca and the nuclear safety company Framatome. When predatory journals publish these companies’ research, the companies can claim that it is “peer reviewed,” thus implying scientific legitimacy. But the damage spills over into academia, further blurring the boundary between legitimate and facsimile science: the researchers found hundreds of papers from academics at leading institutions, including Stanford, Yale, Columbia, and Harvard.53 Whether the academic authors realize that they are publishing in sham journals is unclear; probably some do and some do not. The New York Times has called this phenomenon “fake academia.” The phenomenon is sufficiently recognized that Wikipedia has an entry for “predatory conferences.”54

  Facsimile science can also be used by start-up companies to generate a supposedly scientific basis for proposed drugs and treatments, such as the company First Immune, which “had published dozens of ‘scientific’ papers in these predatory journals lauding the effectiveness of an unproven cancer treatment called GcMAF.… The CEO of First Immune, David Noakes, will stand trial in the United Kingdom later this year for conspiracy to manufacture a medical product without a license.”55

  No doubt these activities are bad for science, insofar as they can generate confusion within expert communities, but experts will likely see the flaws in many if not most instances of facsimile science. The greater risk, I believe, is that to the extent that the public learns about these corrupt practices, they may come to distrust science generally. It is essential for academic scientists to pay attention to these issues, particularly the question of who is funding their science and to what ends, to insist in all circumstances on full disclosure of that funding, and to reject any grants or contracts that involve non-disclosure or non-publication agreements. In this sense, Professor Krosnick and I agree: it is essential for scientists to keep their house in order.

  The difficulty of keeping our own house in order is underscored by one of the examples on which Professor Krosnick relies: the Amgen Pharmaceuticals replication study of papers published in Science, Nature, and Cell. These are top-flight journals that reject most of what is submitted to them and often boast about the importance of the work published in their pages, and, as Professor Krosnick notes, many scientists experience institutional pressure to publish in such prestigious journals. This may increase the odds that they exaggerate the novelty or significance of their results. But how reliable is the Amgen study?

  In his note 18, Krosnick oddly cites not the Amgen study itself, but a very interesting and useful study of replication in psychology, which highlights the tension in science between innovation and replication and the need for both: “Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both.”56 This paper discusses the Amgen study, albeit in passing. Here is what its authors have to say about the latter: “In cell biology, two industrial laboratories [Amgen and Bayer] reported success replicating the results of landmark studies in only 11 and 25% of the attempted cases.… These numbers are stunning but also difficult to interpret because no details are available about the studies, methodology, or results. With no transparency, the reasons for low reproducibility cannot be evaluated.” Why didn’t the Amgen scientists offer details about their study? We cannot say, because the published article was not, in fact, a peer-reviewed study, but a “Comment” by two authors, one an Amgen scientist and the other an academic.57 It specifically addressed the problem of “suboptimal preclinical validation” in oncology trials, and its recommendations were equally specifically directed to oncology research.58

  Cancer is both a dreadful and a scientifically complex disease, and the authors offer numerous reasons why promising early results may not translate into effective treatments. They also note that “it was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new.”59 In other words, the sample was deliberately and selectively focused on novel results; it was not a general appraisal of reproducibility in biomedicine. Admittedly, the replication rate achieved—11%—was very low. But was it “shocking”? Given that the papers were selected because they were novel and surprising, it strikes me as unsurprising that on further inspection most of them did not hold up. As I have stressed throughout this book, scientific knowledge consists of bodies of theory and observations. One paper does not constitute—cannot constitute—a scientific proof. If pharmaceutical companies design clinical trials based on inadequately verified scientific claims, that is certainly problematic, but it’s not clear that the problem lies in science.

  I agree with Professor Krosnick that suboptimal practices and problems in science need to be openly acknowledged and addressed; that is precisely the purpose of this book! But if we overgeneralize the problem, and are cavalier about funding (or any other kind of bias), it will be difficult if not impossible to assess either the extent or the cause of the replication crisis.

  Professor Krosnick’s comment underscores the need for the overall project of which this book is a small part: academic history and philosophy of science. He suggests that scientists rush to publish and exaggerate novelty because they “want to publish … counterintuitive findings” and, paradoxically, “don’t want to admit that they didn’t predict their findings.” The first idea—that scientists should seek out counter-intuitive results that will upset the applecart of received wisdom—was a central idea for Karl Popper. The second idea—that we should be able to predict our findings—is the centerpiece of the hypothetico-deductive model of science. In chapter 1 we saw that both these models have serious logical flaws and neither works well as an accurate empirical description of scientific activity. If Professor Krosnick is right about what scientists want, then scientists are wanting a lot of wrong things. In that case, I hope that this book will help them to appreciate what science can and cannot give them.

  AFTERWORD

  Truthiness. Fake news. Alternative facts. Since these Princeton Tanner Lectures were delivered in late 2016, the urgency of sorting truth from falsehood—information from disinformation—has exploded into public consciousness.1 Climate change is a case in point. In the United States in the past two years, devastating hurricanes, floods, and wildfires have demonstrated to ordinary people that the planetary climate is changing and the costs are mounting. Denial is no longer just pig-headed; it is cruel. The American people now understand—as people around the globe have for some time—that anthropogenic climate change is real and threatening.2 But how do we convince those who are still in denial, among them the president of the United States, who has withdrawn the United States from the international climate agreement and declared climate change to be a “hoax”?3

  Moreover, on many other issues our publics are as confused as ever. Millions of Americans still refuse to vaccinate their children.4 Glyphosate pesticides remain legal and widely used, even as evidence of their harm mounts.5 And what about sunscreen?

  In this social climate, one might conclude that the arguments of this book are overly academic, that the social and political challenges to factual knowledge are so great that we should be focused on these dimensions and not on epistemology. As the coauthor of Merchants of Doubt—a book dedicated to explicating ideologically motivated opposition to scientific information—I might be expected to do just that. That would be a mistake.

  As Erik Conway and I showed in that book, the core strategy of the “merchants of doubt” is to create the impression that the relevant science is unsettled, the pertinent scientific issues still appropriately subject to contestation. If we respond on their terms—offering more facts, insisting that these facts are facts—then they win, because now there is contestation. When it comes to doubt-mongering, one cannot fight fire with fire. One has to shift the terms of debate. One way to do so is by exposing the ideological and economic motivations underlying science denial, to demonstrate that the objections are not scientific, but political. Another is by explaining how science works and affirming that, under many if not all circumstances, we have good reason to trust settled scientific claims. In Merchants of Doubt, Conway and I did the first. Here, I am attempting to do the second.

 
