Why Trust Science?


by Naomi Oreskes


  An abundant literature now documents how various parties have tried to create the impression of scientific uncertainty and debate as a means to block public policy that conflicts with their political, economic, and ideological interests.186 But these are not the only reasons that people attack science, insist there is no consensus, or promote alternative theories. People attack science to get attention, to sell alternative therapies, or because they are frustrated that science doesn’t have an answer to a problem that affects them.187 But it is a relatively simple matter to distinguish between scientific debate and other stuff: Scientific debate takes place within the halls of science and on the pages of academic journals; other stuff takes place in other places. Political debate takes place on the op-ed pages of newspapers. Grievances can be aired anywhere. Sadness, isolation, and frustration make people lash out. But if, like the editors of Hedgehog Review, we mischaracterize political debate, industry shilling, or social disaffection as scientific controversy, then our attempts to remedy the situation will almost certainly fail.

  Method

  In the episodes we have been discussing, problems arose because scientists discounted evidence that failed to meet their methodological preferences. In the early twentieth century, geologists rejected continental drift because it did not fit their inductive methodological standards. Charles Davenport was attracted to eugenics in part because he wanted to make biology more rigorous by making it more quantitative. In the cases of dental flossing and the Pill, scientists discounted clinical evidence because of a lack of robust epidemiological data. This last point is particularly important, because in the contemporary world we have come to rely on statistical analysis to a degree that has led many people to ignore important evidence, including the evidence of everyday experience that hormones affect our moods and flossing makes our gums less bloody. This doesn’t mean that everyday experience is superior to statistics; it is not. Good statistical studies are an essential part of modern science. It just means that statistics, like any tool, don’t work well in all cases and conditions, and like any tool can be used well or badly (Krosnick, this volume).

  A focus on one method above all others is a kind of fetish. These cases suggest that some of the historical examples of “science gone awry” arose from what I call methodological fetishism: situations in which investigators privileged a particular method and ignored or discounted evidence obtained by other methods, which, if heeded, could have changed their minds.

  Experience and observation come in many forms. A good deal of evidence is imperfect, but that is no reason to ignore it. It is foolish to discount evidence simply because it comes in messy forms, particularly when the preferred methodological standard is difficult to meet or unsuitable to the question at hand. Randomized double-blind trials are powerful when they can be done, but when they cannot, we should not throw up our hands and suggest we know nothing. There is no way to know how a drug makes people feel without asking about their feelings. There is no way to do a double-blind trial of flossing or nutrition. Imperfect information is still information.

  When we have independent information about causes and mechanisms—such as knowing that flossing reduces gingivitis, that hormonal contraception can affect serotonin receptors (and vice versa: that antidepressants targeting serotonin reuptake can affect hormones), or that greenhouse gases alter the radiative balance of the planet—this information is crucial to helping us evaluate claims when our statistical information is noisy, inadequate, or incomplete. Mechanisms matter. When we know something about relevant mechanisms, there is no reason to play dumb.188

  Evidence

  It seems obvious to say, but scientific theories should be based on evidence. However, in two of the cases examined here, we saw scientists making affirmative claims on the basis of scant evidence. Dr. Edward Clarke built an ambitious and socially consequential theory about female capacity on the basis of seven patients. Critics at the time noticed not only that his database was scant, but also that it was biased: his patients were all young women who had come to him suffering anxiety, backache, headache, and anemia, and whom he described as pursuing educational or professional goals in a “man’s way.”189 (This included an actress and a bookkeeper; only one was actually a student in a woman’s college.)

  In hindsight it is more than obvious that the symptoms he described—headache, backache, anxiety—could have had any of a number of causes. They are also afflictions that often occur in men, yet Clarke offered no evidence that these ills were more common in women, or more common among women who were educated than among those who were not. He presented his theory in the framework of hypothetico-deductivism, yet he failed to pursue the required next step: to determine whether his deduction was true. Most conspicuously, he provided no evidence that these women’s reproductive systems were weakened or that their fertility had decreased. When women physicians and educators pointed out these flaws, Clarke ignored them. His theory was elegant, but it could be sustained only by ignoring evidence available to him at the time.

  Values

  The role of values in science is a much-mooted issue, and the stories told here show how easily prevailing social prejudices may be instantiated into scientific theory. Scientists have not always been on the side of the angels. Anyone who values science must acknowledge this.

  The traditional impulse of scientists has been to say that in cases such as eugenics, science was “distorted” by values. But historians of science, particularly but not only feminists, have noted the ways in which values are broadly infused into scientific life, and not always in adverse ways. It is true that racial and ethnic prejudice infused eugenic thinking, and the sexism in Edward Clarke’s work is not difficult to discern. But values also played a role in the critiques of those theories. Socialist values were crucial to some geneticists’ critique of eugenic thinking; feminist values informed Mary Putnam Jacobi’s identification of the theoretical and empirical inadequacies of the Limited Energy Theory. Barbara Seaman was a journalist, not a scientist, but her feminist values motivated her to follow up on the “anecdotes” she had heard, to seek out the doctors who could confirm the substance of these stories, and to highlight information that some doctors were discounting.

  This, it seems to me, is the most important argument for diversity in science, and for diversity in intellectual life in general. A homogeneous community will be hard-pressed to recognize which of its assumptions are warranted by evidence and which are not. After all, just as it is hard to hear your own accent, it is hard to identify prejudices that you share. A community with diverse values is more likely to identify and challenge prejudicial beliefs embedded in, or masquerading as, scientific theory.

  Critics of efforts to make science more diverse sometimes insist that the only relevant standard in science is “excellence.”190 Science, they insist, is a meritocracy in which demographic considerations are misplaced. These critics seem to think that calls for diversity are merely political; that there is no intellectual value in building diverse communities. The stories told here refute that idea. They suggest that diversity can result in a more rigorous intellectual outcome by fostering critical interrogations that reveal embedded social prejudice.

  Admittedly, this claim cannot be proved, because in science we have no independent metric by which to judge epistemic success. We cannot stand apart from our truth claims and independently determine if they are true; nor can we compare the “truth-production” of more and less diverse communities. But in a domain where there are metrics of success—namely, business—rigorous studies have demonstrated that diverse teams yield better outcomes, in terms of both qualitative values, such as creativity, and quantitative outcomes, such as sales. If we know that diversity is beneficial in the commercial workplace, why would we not presume that it would be beneficial in the intellectual workplace as well? Moreover, we saw in chapter 1 that there is an epistemological basis for presuming that diversity does benefit science. The examples presented in this chapter support that claim. Thus we may conclude that scientific communities that are “politically correct”—in the sense of taking seriously the value of diversity—are more likely to yield work that is scientifically correct.

  Considering the role of values also helps explain what we could call the misapplication of theory and the asymmetry of application. In hindsight, there is an obvious theoretical flaw in Clarke’s work: while presented as an application of thermodynamics, it was actually a misapplication of the theory, because conservation of energy applies to closed systems. The human body is not a closed system: it is sustained and supported through nutrition. Life is possible because organisms are not closed systems, so Clarke’s use of thermodynamics was logically fallacious. It was also asymmetrical, because for some odd reason it applied only to women. Admittedly, Clarke had an explanation for this: he suggested that the female contribution to reproduction was uniquely demanding, and he allowed the possibility that overexertion could be harmful to boys and men as well. Yet, while stressing the claim that if a woman were educated, her uterus would shrink, he evidently never paused to ask: if men were educated, what part of their anatomy would shrink?

  Eugenicists likewise applied their theories asymmetrically. As Muller and Haldane stressed, the target of their attention was the working class. There were drunkards, gamblers, and layabouts among the wealthy, yet few eugenicists advocated sterilization of underperforming rich white men.

  Humility

  If the history of science teaches anything, it is humility. Smart, hard-working, and well-intentioned scientists in the past have drawn conclusions that we now view as incorrect. They have allowed crude social prejudice to inform their scientific thinking. They have ignored or neglected evidence that was readily available. They have become fetishists about method. And they have successfully persuaded their colleagues to take positions that in hindsight we see as incorrect, immoral, or both.

  Many of the scientists in these stories were driven by a genuine desire to do good: to promote an effective means of birth control, for example, or protect women from something they honestly believed would harm them. But their failings are a reminder that anyone engaged in scientific work should strive to develop a healthy sense of self-skepticism. Edward Clarke was a supremely confident man. So was Charles Davenport. So were many of the early advocates of the contraceptive pill. Wegener’s critics accused him of “auto-intoxication,” and I daresay we have all encountered scientists who are overly enamored of themselves. It seems to me that individual scientists, if they care about truth, should be mindful of this problem and not ride roughshod over their colleagues.

  If the social view of science is correct, however, then it may not matter too much if a particular individual is auto-intoxicated. Inevitably there will be arrogant individuals in science, but so long as the community is diverse and alternative views are available, and so long as the community as a whole finds the means for all its members to be heard, things are likely to go well. Nonetheless, collectively scientists should still bear in mind that—whatever conclusions they come to and however they come to them—even with the best practices and the best of intentions, there is always the possibility of being wrong, and sometimes seriously so.

  Conclusion: Science as a Form of Pascal’s Wager

  In evaluating a scientific claim that has social, political, or personal consequences there is one more question that needs to be considered: What are the stakes of being wrong in either direction? What is the risk of accepting a claim that turns out to be false versus the risk of rejecting a claim that turns out to be true?

  If a healthy woman decides to take the Pill knowing there is a risk of depression, she can quickly stop taking it should that risk materialize. Pill-induced depression generally clears up quickly, so for many women the risk is modest and worth taking. Similarly, dental floss is cheap and takes only a few minutes a day to use. If it turns out to have little benefit, little has been lost. But some issues are not so easily resolved.

  Consider anthropogenic climate change. Despite fifty years of sustained scientific work, communicated in tens of thousands of peer-reviewed scientific papers and many hundreds of governmental and nongovernmental reports, many people in the United States are still skeptical of the reality of climate change and the human role in it. The president has doubted it, as have members of Congress, business leaders, and the editorial page of the Wall Street Journal. Rejecting centuries of well-established physical theory and reams of empirical evidence regarding matters such as sea level rise and the intensification of extreme weather events, others have suggested that while anthropogenic climate change might be a real thing, it is inconsequential and might even be beneficial.191

  As a historian of science, mindful of the Limited Energy Theory and eugenics and the history of hormonal contraception—mindful of the difficulties of evaluating dental floss—and above all mindful of the methodological ideals that geologists brought to bear in evaluating continental drift—I have never assumed that trust in science is always or even usually warranted. I have always felt that it is fair to ask: What is the basis for any scientific claim? Should we trust scientists?

  We cannot eliminate the role of trust in science, but scientists should not expect us to accept their claims solely on trust. Scientists must be prepared to explain the basis of their claims and be open to the possibility that they might be wrongly dismissing or discounting evidence. If someone—be it a fellow scientist, an amateur, a journalist, or an informed citizen—has a credible case that evidence is being discounted or weighed asymmetrically, this should concern us. Scientists need to remain open to the possibility that they have made a mistake or missed something significant.192 The key point is that the basis for our trust lies not in scientists—as wise or upright individuals—but in science as a social process that rigorously vets claims.

  This does not mean that scientists must spend time and energy continuing to prove and reprove conclusions that have already been established beyond a reasonable doubt, nor continuing to refute claims that have already been refuted. As Thomas Kuhn argued more than half a century ago, to the extent that science can be said to progress, it is because scientists have mechanisms by which they reach agreement and then move on. Perhaps the most salient aspect of the continental drift debate is that it was reopened, which occurred when a new generation of scientists developed new lines of pertinent evidence.193

  We can reframe this problem in terms of Pascal’s Wager. No matter how well-established scientific knowledge is—no matter how strong the expert consensus—there will always be residual uncertainty. For this reason, if our scientific knowledge is being challenged (for whatever reason), we might take a lead from Pascal and ask: What are the relative risks of ignoring scientific claims that turn out to be true versus acting on claims that turn out to be false?194 The risks of not flossing are real, but not inordinate. The risks of not acting on the scientific evidence of climate change are inordinate.195
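  The wager logic can be made explicit as a simple expected-loss comparison: a minimal sketch, with purely illustrative symbols rather than anything drawn from this book. Let $p$ be the probability that the scientific claim is true, $C_a$ the cost of acting on it, and $C_i$ the cost incurred if we ignore it and it turns out to be true. Then

  $$E[\text{loss} \mid \text{act}] = C_a, \qquad E[\text{loss} \mid \text{ignore}] = p\,C_i,$$

  and acting is the better wager whenever $C_a < p\,C_i$. For flossing, both costs are small, so little hangs on the choice; for climate change, $C_i$ is inordinate, so the inequality holds even when residual uncertainty keeps $p$ well below one.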

  Admittedly, the advocates of eugenic social policies considered the risks of not implementing them to be extremely high. That, of course, was their interpretation of the scientific evidence. But as we have seen, there was no consensus on that evidence. So we are back to the importance of consensus. If we can demonstrate that there is no consensus among relevant experts, then it becomes clear that we have a weak basis for public policy. This is why the tobacco industry tried for so long to claim that the science regarding the harms of tobacco was unsettled: if it really had been, then they might have been right to insist that tobacco control was premature.196 Similarly, if there were no scientific consensus about anthropogenic climate change, then the fossil fuel industry and libertarian think tanks might be right to ask for more research. This is why consensus studies are relevant and important: Knowing there is a consensus does not tell us what to do about a problem like climate change, but it does tell us that we almost certainly have a problem.197

  If we can establish that there is a consensus of relevant experts, then what? Can we be confident in accepting their conclusions and using them to make decisions? My answer is a qualified yes. Yes, if the community is working as it ideally should. That is a substantial qualification. As Brian Wynne has put it, if we are to respect and trust science, then “it becomes evident why the quality of its institutional forms—of organization, control and social relations—is not just an optional embellishment of science in public life, but an essential component of critical social and cultural evaluation.”198

  The history of science shows that there is no guarantee that the ideals of an open, diverse community, participating in transformative interrogation, will be achieved. Often it will not be (although the consequences of failing to meet this ideal may not always be profound or even significant). Historian Laura Stark notes that the National Bioethics Advisory Commission recommends that one-quarter of the members of the boards that review human subjects research should not be affiliated with the institution at which the research is being done, but this goal is rarely achieved.199

  How do we determine if a scientific community is sufficiently diverse, self-critical, and open to alternatives, particularly in the early stages of investigations when it is important not to close off avenues prematurely? How do we evaluate the quality of its institutional forms? We must examine each case on an individual basis. Many scientists were wrong about continental drift, but that does not mean that a different group of scientists are wrong today about climate change. They may be or they may not be. We cannot assert either position a priori.

 
