The Scientific Attitude


by Lee McIntyre


  Naturally, the ID theorist would push back and hope to engage in a lengthy debate about the origins of the eye and the missing link (too bad that evolutionists have an explanation for both).88 But this raises the important question of whether such debates are even worth having—for do ID theorists even accept scientific standards of evidence? As pseudoscientists, ID theorists seek to insulate their views from refutation. They misunderstand the fundamental principle behind scientific reasoning, which is that one should form one’s beliefs based on empirical evidence and commit to changing one’s beliefs as new evidence comes forward.

  And they have other misconceptions about science as well. For one, ID theorists often speak as if, in order to be taught in the science classroom, evolution by natural selection would have to be completely proven by the evidence. As we have seen, that is not how science works. As discussed in chapter 2, no matter how strong the evidence, scientific conclusions are never certain. Ah, the ID theorist may now object, isn’t that why one should consider alternative theories? But the answer here is no, for any alternative theories are also bound by the principle of warrant based on sufficient evidence, and ID theory offers none. Note also that any “holes” in evolutionary theory would not necessarily suggest the validity of any particular alternative theory, unless it could explain them better.

  Our conclusion? ID theory is just creationist ideology masquerading as science. The objection that evolution is not “settled science,” and that ID theory therefore must at least be considered, is nonsense. A place in the science curriculum must be earned. Of course, it is theoretically possible that evolutionary theory could be wrong, as any truly scientific theory could be. But this is overwhelmingly unlikely, for it is supported by a plethora of evidence from microbiology up through genetics. Merely to say “Your theory could be wrong” or “Mine could be right” is not enough in science. One must offer some evidence. There must be some warrant. So even though it is theoretically true that evolution could be wrong, this does not somehow make the case for ID theory any more than it does for the parody religion of Pastafarianism and its “scientific” theory of the Flying Spaghetti Monster, which was invented as brilliant satire by an unemployed physics graduate protesting the Kansas State Board of Education’s decision to teach ID, to illustrate the scientific bankruptcy of ID theory.89 Indeed, if scientific uncertainty requires the acceptance of all alternative theories, must the astronomer also teach Flat Earth theory? Should we go back to caloric and phlogiston? In science, certainty may be unobtainable, but evidence is required.90

  Another predictable complaint of ID theorists is that natural selection is “just a theory.” Recall this myth about science from chapter 2. But to say that evolution is a theory does not dishonor it. To have a theory with a strong basis of evidential support—that backs up both its predictions and explanations, and is unified with other things that we believe in science—is a formidable thing. Is it any wonder that ID theory cannot keep up?

  Yet let us now ask the provocative question: what if it could? What if there were some actual evidence in support of some prediction made by ID theory? Would we owe it a test then? I think we would. As noted earlier, even fringe claims must sometimes be taken seriously (which does not mean that they should be immediately inserted into the science classroom). This is because science is open to new ideas. In this way, some fringe theorists who bristle at the dismissal of their work as pseudoscience may have a legitimate complaint: where there is evidence to offer, science has no business dismissing an alternative theory based on anything other than evidence. But this means that if “pseudoscientific” theories are to be taken seriously, they must offer publicly available evidence that can be tested by others in the scientific community who do not already agree with the theory. So when they meet this standard, why don’t scientists just go ahead and investigate?

  Sometimes they do.

  The Princeton Engineering Anomalies Research Lab

  In 1979, Robert Jahn, the dean of the School of Engineering and Applied Science at Princeton University, opened a lab to “pursue rigorous scientific study of the interaction of human consciousness with sensitive physical devices, systems, and processes common to contemporary engineering practice.”91 He wanted, in short, to study parapsychology. Dubbed the Princeton Engineering Anomalies Research (PEAR) lab, it spent the next twenty-eight years studying various effects, the most famous being psychokinesis: the alleged ability of the human mind to influence physical events.

  A skeptic might immediately jump to the conclusion that this is pseudoscience, but remember that the proposal was to study this hypothesis in a scientific way. The PEAR team used random number generator (RNG) machines and asked their operators to try to influence the output with their thoughts. And what they found was a slight but statistically significant effect of 0.00025. As Massimo Pigliucci puts it, although this is small, “if it were true it would still cause a revolution in the way we think of the basic behavior of matter and energy.”92

  What are we to make of this? First, it is important to remember our statistics:

  Effect size is the size of the difference from random. Suppose you had a fair coin, flipped it 10,000 times, and it came up heads 5,000 times. Then you painted the coin red, flipped it another 10,000 times, and it came up heads 5,500 times. The difference of 500 is the effect size.

  Sample size is how many times you flipped the coin. If you flipped a painted coin only 20 times and it came up heads 11 times, that is not very impressive. It could be due just to chance. But if you flip the coin 10,000 times and it comes up heads 5,500 times, that is pretty impressive.

  P-value is the probability that you would see an effect at least as large as the one observed if only random chance were at work. The reader will remember from our discussion in chapter 5 that p-value is not the same as effect size. The p-value is influenced by both the effect size and the sample size. If you do your coin flips a large number of times and still get a lopsided result, the p-value will be lower, because such a result is unlikely to be due to chance. But effect size also influences the p-value. To get 5,500 heads out of 10,000 coin flips is actually a pretty big effect, and with an effect that big, the result is much less likely to be due to randomness alone, so the p-value goes down.
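  The coin-flip arithmetic above can be checked directly. Here is a minimal sketch in Python (an editorial illustration, not from the book), computing exact one-sided binomial tail probabilities for the two cases in the text:

```python
from math import comb

def one_sided_p(heads, flips):
    """Probability of getting at least `heads` heads in `flips` fair-coin flips."""
    tail = sum(comb(flips, k) for k in range(heads, flips + 1))
    return tail / 2 ** flips  # Python's big integers keep the sum exact until the division

# 11 heads out of 20 flips: a 55 percent hit rate, but easily produced by chance
p_small = one_sided_p(11, 20)       # roughly 0.41

# 5,500 heads out of 10,000 flips: the same 55 percent rate, now wildly improbable by chance
p_large = one_sided_p(5500, 10000)
```

  The effect size (a 55 percent hit rate) is identical in both cases; only the sample size differs, and the p-value collapses accordingly.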

  Before we move on to the PEAR results, let’s draw a couple of conclusions from the coin flip example. Remember that the p-value does not tell you the cause of an effect, just the chance that you would see it if the null hypothesis were true. So it could be that the red paint you used on the coin was magic, or it could be that the weight distribution of the paint caused the coin to land on heads more often. One can’t tell. What we can tell, however, is that a larger number of trials allows even a very small effect to be detected. Suppose you had a perfectly normal-looking unpainted coin, but it had a very small bias because of the way it was forged. If you flipped this coin one million times, the small bias would show up in the p-value. The effect size would still be small, but the p-value would go down because of the number of flips. Conclusion: it wasn’t a fair coin. Similarly, a large effect size can have a dramatic effect on the p-value even with a small number of flips. Suppose you took a fair coin, painted it red, and it came up heads ten times in a row. That is unlikely to be due to random chance. Maybe you used lead paint.
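  That unfair-coin scenario is easy to simulate. In this sketch (an invented example with a made-up 50.5 percent bias, not PEAR data), the bias is far too small to notice in any short run, yet a million flips expose it:

```python
import random
from math import sqrt, erfc

random.seed(42)  # fixed seed so the run is reproducible
bias, flips = 0.505, 1_000_000  # a bias invisible to casual inspection

heads = sum(random.random() < bias for _ in range(flips))

effect = heads / flips - 0.5                   # effect size: still tiny
z = (heads - flips / 2) / sqrt(flips / 4)      # standard deviations away from a fair coin
p_value = 0.5 * erfc(z / sqrt(2))              # one-sided normal approximation
```

  The effect size stays around half a percent, but the p-value is astronomically small: the verdict “not a fair coin” comes from the number of trials, not from the size of the bias.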

  So what happened in the PEAR lab? They did the equivalent of flipping the coin for twenty-eight years in a row. The effect size was tiny, but the p-value was minuscule because of the number of trials. This shows that the effect could not have been due to random chance, right? Actually, no.

  Although their hearts may have been in the right place—and I do not want to accuse the folks at the PEAR lab of any fraud or even bad intent—their results may have been due to a kind of massive unintentional p-hacking. In Pigliucci’s discussion of PEAR research in his book Nonsense on Stilts, it becomes clear that the entire finding depends crucially on whether the random number generators were actually random.93

  What evidence do we have that they were not? But this is the
wrong question to ask. Remember that one cannot tell from a result what might have caused it, but before one embraces the extraordinary hypothesis that human minds can affect physical events, one must rule out all other plausible confounding factors.94 Remember that once we painted the coins, we could not tell whether the effect was due to the “magic” properties of the paint or its differential weight on one side of the coin. Using Occam’s razor, guess which one a skeptical scientist is going to embrace? Similarly, the effect at the PEAR lab could have been due either to psychokinesis or a faulty RNG. Until we rule out the latter, it is possible that all those years working with RNGs in the Princeton lab do not show that psychokinesis is possible at all, so much as they show that it is physically impossible to generate random numbers! As Robert Park puts it in Voodoo Science, “it is generally believed that there are no truly random machines. It may be, therefore, that the lack of randomness only begins to show up after many trials.”95

  How can we differentiate between these hypotheses? Why are there no random machines? This is an unanswered question, which goes to the heart of whether the psychokinetic hypothesis deserves further research (or indeed how it could even be tested). But in the meantime, it is important also to examine other methodological factors in the PEAR research. First, to their credit, the PEAR team did ask other labs to try to replicate their results. These attempts failed, but inviting them is at least indicative of a good scientific attitude.96 What about peer review? This is where it gets trickier. As the manager of the PEAR lab once put it, “We submitted our data for review to very good journals, but no one would review it. We have been very open with our data. But how do you get peer review when you don’t have peers?”97 What about controls? There is some indication that controls were implemented, but they were insufficient to meet the parameters demanded by PEAR’s critics.

  Perhaps the most disconcerting thing about PEAR is the fact that suggestions by critics that should have been considered were routinely ignored. Physicist Bob Park reports, for example, that he suggested to Jahn two types of experiments that would have bypassed the main criticisms aimed at PEAR. Why not do a double-blind experiment? asked Park. Have a second RNG determine the task of the operator and do not let this determination be known to the one recording the results. This could have eliminated the charge of experimenter bias.98

  While there has never been an allegation of fraud in PEAR research, it is at least suspicious that fully half of the effect size came from trials done by a single operator over the twenty-eight years, presumably an employee of the PEAR lab. Perhaps that individual merely had greater psychic abilities than the other operators. One may never know, for the PEAR lab closed for good in 2007. As Jahn put it,

  It’s time for a new era, for someone else to figure out what the implications of our results are for human culture, for future study, and—if the findings are correct—what they say about our basic scientific attitude.99

  Even while one may remain skeptical of PEAR’s results, I find it cheering that the research was done in the first place and that it was taken seriously enough to be critiqued. While some called the lab an embarrassment to Princeton, I am not sure I can agree. The scientific attitude demands both rigor on the part of the researchers and openness on the part of the wider scientific community in considering scientifically produced data. Were these results scientific or pseudoscientific? I cannot bring myself to call them pseudoscience. I do not think that the researchers at PEAR were merely pretending to be scientists any more than those doing cold fusion or string theory are. Perhaps they made mistakes in their methodology. Indeed, if it turns out that there actually is no such thing as a random number generator, perhaps the PEAR team should be credited with making this discovery! Then again, it would have been nice to see an elementary form of control, in which they let one RNG run all by itself (perhaps in another room) for twenty-eight years, with no attempt to influence it, and measured this against their experimental result. If the control machine showed 50 percent and the experimental one showed 50.025 percent, I would be more inclined to take the results seriously (as showing both that psychokinesis was possible and that truly random number generation was too).
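  The control design suggested above is easy to sketch. Assuming, purely for illustration, that the reported 0.00025 effect means a 0.025 percent excess in hit rate (my reading, and the trial counts below are invented), a two-proportion z-test shows both how the comparison would work and why an enormous number of trials is needed to see so small an effect:

```python
from math import sqrt, erfc

def two_prop_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test: do samples A and B have different hit rates?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)           # pooled hit rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, erfc(abs(z) / sqrt(2))                   # z-score and two-sided p-value

# Hypothetical counts: experimental RNG at 50.025 percent, control RNG at exactly 50 percent
n = 100_000_000                                        # trials per machine (invented number)
z, p = two_prop_z(n // 2 + 25_000, n, n // 2, n)
```

  Even with a hundred million trials on each machine, a 0.025 percent excess yields a z-score of only about 3.5; with far fewer trials the effect would vanish into noise, which is one reason data on so small an effect take decades to accumulate.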

  Conclusion

  Is it possible to think that one has the scientific attitude, but not really have it? Attitudes are funny things. Perhaps I am the only one who knows how I feel about using empirical evidence; only my private thoughts can tell me whether I am truly testing or insulating my theory. And even here I must respect the fact that there are numerous levels of self-awareness, complicated by the phenomenon of self-delusion. Yet the scientific attitude can also be measured through one’s actions. If I profess to have the scientific attitude, then refuse to consider alternative evidence or make falsifiable predictions, I can be fairly criticized, whether I feel that my intentions are pure or not. The difference that marks off denialism and pseudoscience on one side and science on the other is more than just what is in the heart of the scientist or even the group of scientists who make up the scientific community. It is also in the actions taken by the individual scientist, and the members of his or her profession, who make good on the idea that science really does care about empirical evidence. As elsewhere in life, we measure attitudes not just by thought but also by behavior.

