
Why Trust Science?

by Naomi Oreskes


  Historians of science recapitulated this framework, denying the tight linkages and intellectual affinities between science and technology and emphasizing instead their distinctive characteristics, independent institutional structures, and mostly non-overlapping populations of practitioners. Historians of technology rejected the premise that their object of study was inferior, but accepted—indeed promoted—the notion that it was separate and distinct from science. Both groups were content to see their professions proceed on parallel tracks, and many preferred it.

  I think Professor Lindee is completely correct about this. In my own recently completed magnum opus on the history of US Cold War oceanography, I have made a similar argument: that American oceanographers generally downplayed and sometimes denied the technological aspects of their work.1 Most of these technological elements were closely related to submarine warfare, including the delivery of weapons of mass destruction (to use our language).

  For the most part scientists were unable to discuss these relations because of security restrictions, so it is not always easy to discern how they felt about them, but some scientists explicitly expressed moral qualms. Others did not necessarily doubt the imperative of countering the Soviet threat, but nevertheless questioned the wisdom of hitching their scientific horses to the military wagon. One way to skirt the moral dimension was to insist that their work was not so hitched: that even though the US military was paying for it, scientists had retained control of their intellectual agenda.2 The ideological framework of pure science enabled them to claim—and perhaps believe—that the knowledge they produced was separate and distinct from anything the US government, through its armed services, might do with it. And so many of these oceanographers insisted they were pursuing “pure science,” even when this was manifestly not the case.

  Given this history, it is not surprising that many Americans do not have a clear conception of the historical or current relationships between science and technology. But I suspect that our current situation is overdetermined: there may be many reasons why Americans are confused about science and technology. These include a near-complete lack of engineering education in primary and secondary schools, so that most students, unless they study engineering in college, will have no exposure whatsoever to engineering and no sense of how engineers use science in their everyday work. Conversely, science is generally taught with little reference to its practical uses, and popular science writing perpetuates a myriad of myths that distress historians, the relation between science and technology being from my perspective the least of them.

  Thus, it seems to me unlikely that what scientists or historians claimed in the 1950s and ’60s is a primary factor explaining our current situation. To be sure, today’s senior scientists were raised by the Cold War generation and perhaps for this reason have often perpetuated the “separate and unequal” framework that Lindee laments. But the current generation has also pioneered biotechnology—which by its very name declares itself to be both science and technology—and routinely invokes technology as a justification for why we should believe in science.3 Yet this has not stopped religious fundamentalists from rejecting evolutionary theory nor free market fundamentalists from rejecting the facts of anthropogenic climate change.

  Yes, there is a tremendous amount of scientific knowledge embedded in everyday technologies, from roads and bridges to iPhones and laptops, and, indeed, frozen peas. Explaining this more clearly in public settings and in the classroom would remind people that we have direct evidence of science in action in everyday life. But I doubt that it would have the effect Lindee thinks, because even if people are well informed about the science embedded in their cell phones, that is not likely to change their stance on climate change.

  The reason for this is clear and well established: Americans do not reject science tout court; they reject particular scientific claims and conclusions that clash with their economic interests or cherished beliefs. Numerous studies have shown this to be true. The recent report of the American Academy of Arts and Sciences, for example, showed that most Americans do not reject science, overall, but do reject evolutionary biology if they interpret it as clashing with their religious views, or climate change if they see it as clashing with their political-economic views. Tellingly, many Americans are quite content to accept that DNA carries hereditary material even while rejecting evolution as a process that over time alters the DNA of populations.4

  Moreover, this pattern is not a uniquely American pathology, nor a particular feature of the present moment. In the twentieth century, Einstein’s path-breaking work on relativity was rejected by Germans who felt it threatened their idealist ontology.5 Vaccine resistance has been going on for just about as long as vaccination has. Smallpox vaccine noncompliance was so widespread in late nineteenth-century England that the Vaccination Act of 1898 included a “conscience clause” allowing parents to decline vaccination on grounds of personal belief.6

  If we explain to people how their cell phones work, they may very well feel better about those phones, but absent other interventions it will likely have little or no impact on their views of evolutionary theory. People compartmentalize.

  Perhaps most fatally to Lindee’s argument, we know that in recent decades various parties have cynically exploited the values that lead some Americans to reject climate science or evolutionary theory for their own social, political, or financial ends. Recent work suggests that people’s opinions and attitudes can be shifted if you show them—with concrete examples—that this is the case, and explain to them how disinformation works. John Cook and his colleagues call this “inoculation”: By analogy with vaccinations, if you expose people to a small amount of disinformation, in a controlled environment, you can generate future resistance.7

  Lindee’s proposed solution also conflates utility with truth. This is a point that philosophers have long emphasized. When we say that something works, we are making a claim about performance in the world. Cell phones enable us to talk to people in other locations without being connected by wires. Laptops enable us to store and access huge amounts of information in a very easy way. Vaccinations prevent diseases. Frozen vegetables enable us to eat foods long after they were harvested. No one doubts that these things do the things they claim to do, because we see that it is so. But it is another thing to claim that this proves the truth of the theories that underlie them.

  We might argue that every time we use a piece of technology we are performing a small but significant experiment, confirming that the technology works. (Or not, as the case may be. Frozen peas taste pretty lousy.) But this is a very different matter from confirming the underlying theory required to design, build, and use that technology. There are many reasons for this: suffice it here to consider three.

  The first is that my cell phone does not reify a single theory. My phone is a complex expression of the many diverse scientific theories and practices that have gone into its development. These could include theories from electromagnetics, information technology, computer science, materials science, cognitive psychology, and more, as well as various engineering and design practices. We might argue that the success of the cell phone affirms the correctness of all these theories and practices, but the conceptual link for the average user will be vague at best.

  The second is that Lindee’s premise—that the success of the technology is proof of the theory behind it—is an instance of the fallacy of affirming the consequent. It is the core of the logical objection to the hypothetico-deductive model that we discussed in chapter 1. (To recapitulate: If I test a theory and it passes my test, this does not prove that the theory is correct. Other theories may have predicted the same result. Or two [or more] errors in my experiment may have canceled each other out. It is a fallacy to assume that, because my theory worked in that instance, I have demonstrated it is true.)
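  Put schematically (a standard formalization of the fallacy, not notation from the text), the invalid inference has the form

\[
\big( (T \rightarrow O) \wedge O \big) \;\therefore\; T,
\]

where \(T\) is the theory under test and \(O\) the observed outcome. Both premises can be true while \(T\) is false, since some rival theory \(T'\) may entail the very same \(O\).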

  David Bloor has provided a third powerful argument against Lindee’s line of reasoning, which we might label the fallacy of theoretical precedence. Consider airplane flight.

  We might think of airplane flight as one of the most obvious examples of the success of science. For centuries (perhaps longer), people dreamed of flying. Birds could fly, and so could insects. Some mammals could glide over considerable distances. Why not us? In the early twentieth century, clever inventors overcame the challenge of heavier-than-air flight, and soon we had commercial aviation. Today airplane travel is as familiar to most Americans as frozen peas. It is just the sort of everyday technology in which Lindee places epistemological aspiration. She is not alone. In the 1990s, when some scientists tried to defend realist concepts of scientific theory in the face of the challenge of social constructivism, airplane travel was a favorite invocation. How could planes possibly fly if the theory behind them was just a social construction? If it were something that scientists had settled upon for social rather than empirical reasons? If it wasn’t true?

  Confronting this argument, David Bloor uncovered a startling historical fact: that engineers were building planes before they had a working theory of flight. In fact, heavier-than-air machines were flying for years while existing aeronautical theory declared it impossible. As Bloor explains: “The practical success of the pioneer aviators still left unanswered the question of how a wing generated the lift forces that were necessary for flight.” The technological success of aircraft did not signal an accurate theoretical understanding of aerodynamics.

  Perhaps the history of aviation is an odd exception, where technology got ahead of theory. But one of the contributions of the history of technology during the period when it was severed from the history of science was to show how many technologies, particularly prior to the twentieth century, developed relatively independently of theoretical science. Many technological innovations were empirical accomplishments whose relationship to “science” was only established retrospectively.8 Following Lindee, one might argue that one can nonetheless use the success of technology to build trust in current relevant science, irrespective of historically contingent relationships. Or one might suggest that what held true in the eighteenth and nineteenth centuries, and perhaps even in the early twentieth, is no longer the case, and it is simply implausible that our exquisitely complex modern technologies could work as they do if the theories behind them were not true. Perhaps this is the case. Only time will tell.

  * * *

  Marc Lange suggests that the question of why we should trust science can “easily induce a kind of dizziness or even despair.” Many potential answers collapse into circularity. For example, reasoning that invokes empirical evidence (such as my argument based on history) is itself a form of scientific (i.e., empirical) argument, in which case we are using scientific styles of reasoning to defend scientific styles of reasoning—the very definition of circularity. Moreover, if I say we should trust science as the warranted conclusions of experts, then we must ask on what basis someone is to be judged an expert. The answer, of course, is by other experts. So that is circular, too.

  Or is it? The signs of expertise—academic credentials, publications on the pertinent topic in peer-reviewed journals, awards and prizes—are evident to non-experts. Journalists have sometimes asked me, “How am I to tell if an alleged expert really is one, and not just a shill?” I reply, “One place to start is to find out what field they trained in and what publications they have in the domain.” Of course, Professor Lange is right to note that training is provided by other experts—it takes an expert to make an expert—and so it may appear that we have not escaped circularity. But there is an escape, because the social markers of expertise are evident to non-experts. This is a non-trivial point, because it is relatively easy to discern that most climate change deniers are not climate scientists, and that objections to evolutionary theory largely emerge from non-scientific domains. Neutral non-experts can identify experts and discern what they have (or have not) concluded.

  Social markers do not tell us if an expert is trustworthy, but they do tell us if the person is an expert and, more to the point, if a person claiming expertise does not possess it. Similarly, it is (or should be) easy to distinguish a research institution—like Princeton University or the Lawrence Livermore National Laboratory—from policy-driven think tanks, such as the American Enterprise or Discovery Institutes. The fact that journalists often fail to make such distinctions has more to do with deadlines than with epistemology.

  Of course, as I have stressed, experts can be wrong. (Our entire inquiry would be superfluous were this not the case!) As a human activity, science is fallible. Consensus is not the same as truth. Consensus is a social condition, not an epistemic one, but we use consensus as a proxy because we have no way to know, for sure, what the truth is.

  Moreover, the category of consensus is epistemically pertinent, because our historical cases have shown that where experts appear to have gone astray, typically there was a lack of consensus. Thus, we need to live with the fact that our indicators are asymmetric. We can never be absolutely positively sure that we are right, but we do have indicators that suggest when something might be wrong.

  This is why consensus is important—why it is so important to be able to identify and discount shills, celebrities, and perhaps well-meaning but misguided lay people—in order to clarify who is an expert, what they have to say, and on what basis they are saying it.

  Lange’s own solution to these dilemmas is to examine how past debates were resolved and agreement achieved. He shows us that even in the heat of debate, it can be possible to find an argument that persuades one’s scientific antagonists. Galileo persuaded contemporaries of the superiority of his proposal for the relation between time and the distance traveled by a moving body by showing that only his proposal satisfied the demand of dimensional homogeneity, i.e., being independent of the particular (and presumably arbitrary) units applied. Lange concludes that “finding powerful reasons in a crisis is inevitably going to be difficult, but not impossible.”
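  To see what the demand of dimensional homogeneity amounts to, consider a standard textbook illustration (not Lange’s own reconstruction): in the law of free fall,

\[
s = \tfrac{1}{2} g t^{2},
\]

the constant \(g\) carries the dimensions \(\mathrm{L}\,\mathrm{T}^{-2}\), so both sides of the equation have the dimension of length, and the relation holds whether distance is measured in braccia or meters and time in pulse beats or seconds. A proposed law whose truth depended on the particular units chosen would fail this test.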

  This argument, drawn from history, is a nice one, but (as Lange himself acknowledges) it still leaves us with the problem of generalizing from specific examples to science as a whole. Perhaps there is no way to so generalize, but Lange agrees that we ought to try. And that, of course, is the point of this book.

  * * *

  Ottmar Edenhofer and Martin Kowarsch address the relationship between scientific knowledge and public policy. In a world where dangerous anthropogenic climate change threatens human life, liberty, and property, the future of biodiversity, and the stability of liberal democracy, the relation between science and policy is of no small concern. Yet the scientific consensus on climate change has not led to policy consensus. Indeed, they suggest that it cannot, because policy decisions involve many more dimensions than scientific findings do. In particular, policy decisions entail value choices above and beyond whatever values may have been embedded in the scientific work. Thus, they conclude, more work is needed on the “role of scientific expertise and the design of policies” to understand how we get from science to policy on urgent, contested, value-rich issues.

  All this is true, but a bit orthogonal to the point of this book.

  I posed the question—Why Trust Science?—because in recent decades, some groups and individuals have actively sought to undermine public trust in science as a means to avoid policy action that may be warranted by that science. This includes but is by no means limited to climate change. In the United States, it includes such diverse matters as the justification for compulsory vaccination, the hazards of persistent pesticides, and whether children raised by homosexual parents turn out as well adjusted as those raised by heterosexual ones (or at least no less well-adjusted). At least in the United States, where I have studied the matter most closely, it is confusing to say, as Dr. Edenhofer and Kowarsch do, that “conflicts about climate policy are thus not necessarily rooted in a lack of trust in climate science—but rather in disagreement about the design of climate policies.” My work has shown that (for the most part) they are not rooted in a lack of trust in science at all! They are rooted in economic self-interest and ideological commitments, and are intended to stymie discussion of climate policies.

  As Erik Conway and I showed in our 2010 book, Merchants of Doubt, those who deny the findings of climate science do not (for the most part) have principled disagreements with scientists, economists, and environmentalists about the best policy to address anthropogenic climate change. Rather, they do not want any policy at all.9 Because of economic self-interest, ideological commitments to laissez-faire economics, or both, they do not wish to see any government action to limit the use or raise the price of the fossil fuels whose use drives climate change. What they want is preservation of the status quo ante. Recognizing that an honest accounting of the costs of climate change would almost certainly warrant a change in the status quo, they attempt to undermine public confidence in the science that supports such an accounting. Discrediting science is a political strategy. Lack of public trust in science is the (intended) consequence.

  Given this, it would be absurd for me to expect that articulating the reasons for trust in science would alter the positions of climate change deniers. What is not absurd is the hope that for some readers, this book will answer legitimate questions, even if those questions have at times been raised by people with a political agenda with which I strongly disagree.

 
