Theory and Reality


by Peter Godfrey-Smith


  Let us look more closely at what the logical empiricists tried to do. First, I should say more about the distinction between deductive and inductive logic (a distinction introduced in chapter 2). Deductive logic is the well-understood and less controversial kind of logic. It is a theory of patterns of argument that transmit truth with certainty. These arguments have the feature that if the premises of the argument are true, the conclusion is guaranteed to be true. An argument of this kind is deductively valid. The most famous example of a logical argument is a deductively valid argument: all men are mortal; Socrates is a man; therefore, Socrates is mortal.

  A deductively valid argument might have false premises. In that case the conclusion might be false as well (although it also might not be). What you get out of a deductive argument depends on what you put in.

  The logical empiricists loved deductive logic, but they realized that it could not serve as a complete analysis of evidence and argument in science. Scientific theories do have to be logically consistent, but this is not the whole story. Many inferences in science are not deductively valid and give no guarantee. But they still can be good inferences; they can still provide support for their conclusions.

  For the logical empiricists, there is a reason why so much inference in science is not deductive. As empiricists, they believed that all our evidence derives from observation. Observations are always of particular objects and occurrences. But the logical empiricists thought that the great aim of science is to discover and establish generalizations. Sometimes the aim was seen as describing "laws of nature," but this concept was also regarded with some suspicion. The key idea was that science aims at formulating and testing generalizations, and these generalizations were seen as having an infinite range of application. No finite number of observations can conclusively establish a generalization of this kind, so these inferences from observations in support of generalizations are always nondeductive. (In contrast, all it takes is one case of the right kind to prove a generalization to be false; this fact will loom large in the next chapter.)
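  The asymmetry can be put schematically; here is a minimal sketch in standard logical notation (the schematic letters F, G, and the a's are placeholders for illustration). No finite list of positive instances entails the generalization, but a single counterinstance entails its negation:

\[
F(a_1)\wedge G(a_1),\ \ldots,\ F(a_n)\wedge G(a_n)\ \not\models\ \forall x\,(F(x)\rightarrow G(x)),
\qquad
F(b)\wedge\neg G(b)\ \models\ \neg\,\forall x\,(F(x)\rightarrow G(x)).
\]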

  In many discussions of these topics, the logical empiricists (and some later writers) used a simple terminology in which all arguments are either deductive or inductive. Inductive logic was thought of as a theory of all good arguments that are not deductive. Carnap, especially, used "induction" in a very broad way. But this terminology can be misleading, and I will set things up differently.

  I will use the term "induction" only for inferences from particular observations in support of generalizations. To use the most traditional example, the observation of a large number of white swans (and no swans of any other color) might be used to support the hypothesis that all swans are white. We could express the premises with a list of particular cases: "Swan 1 observed at time t1 was white; swan 2 observed at time t2 was white...." Or we might simply say: "All the many swans observed so far have been white." The conclusion will be the claim that all swans are white, a conclusion that could well be false but which is supported, to some extent, by the evidence. Sometimes "enumerative induction" or "simple induction" is used for inductive arguments of this most traditional and familiar kind. Not all inferences from observations to generalizations have this very simple form, though. (And a note to mathematicians: mathematical induction is really a kind of deduction, even though it has the superficial form of induction.)

  A form of inference closely related to induction is projection. In a projection, we infer from a number of observed cases to arrive at a prediction about the next case, not to a generalization about all cases. So we see a number of white swans and infer that the next swan will be white. Obviously there is a close relationship between induction and projection, but (surprisingly, perhaps) there are a variety of ways of understanding this relationship.

  Clearly there are other kinds of nondeductive inference in science and everyday life. For example, during the 1980s Luis and Walter Alvarez began claiming that a huge meteor had hit the earth about 65 million years ago, causing a massive explosion and dramatic weather changes that coincided with the extinction of the dinosaurs (Alvarez et al. 1980). The Alvarez team claimed that the meteor caused the extinctions, but let's leave that aside here. Consider just the hypothesis that a huge meteor hit the earth 65 million years ago. A key piece of evidence for this hypothesis is the presence of unusually high levels of some rare chemical elements, such as iridium, in layers in the earth's crust that are about 65 million years old. These chemical elements tend to be found in meteors in much higher concentrations than they are near the surface of the earth. This observation is taken to be strong evidence supporting the Alvarez theory that a meteor hit the earth around that time.

  If we set this case up as an argument, with premises and a conclusion, it clearly is not an induction or a projection. We are not inferring to a generalization, but to a hypothesis about a structure or process that would explain the data. A variety of terms are used in philosophy for inferences of this kind. C. S. Peirce called these "abductive" inferences as opposed to inductive ones. Others have called them "explanatory inductions," "theoretical inductions," or "theoretical inferences." More recently, many philosophers have used the term "inference to the best explanation" (Harman 1965; Lipton 1991). I will use a slightly different term, "explanatory inference."

  So I will recognize two main kinds of nondeductive inference, induction and explanatory inference (plus projection, which is closely linked to induction). The problem of analyzing confirmation, or the problem of analyzing evidence, includes all of these.

  How are these kinds of inference related to each other? For logical positivism and logical empiricism, induction is the most fundamental kind of nondeductive inference. Reichenbach claimed that all nondeductive inference in science can be reconstructed in a way that depends only on a form of inference that is close to traditional induction. What looks like an explanatory inference can be somehow broken down and reconstructed as a complicated network of inductions and deductions. Carnap did not make this strong claim, but he did seem to view induction as a model for all other kinds of nondeductive inference. Understanding induction was in some sense the key to the whole problem. And the majority of the logical empiricist literature on these topics was focused on induction rather than explanatory inference.

  So one way to view the situation is to see induction as fundamental. But it is also possible to do the opposite, to claim that explanatory inference is fundamental. Gilbert Harman argued in 1965 that inductions are justified only when they are explanatory inferences in disguise, and others have followed up this idea in various ways.

  Explanatory inference seems much more common than induction within actual science. In fact, you might be wondering whether science contains any inductions of the simple, traditional kind. That suspicion is reasonable, but it might go too far. Science does contain inferences that look like traditional inductions, at least on the face of them. Here is one example. During the work that led to the discovery of the structure of DNA by James Watson and Francis Crick, a key piece of evidence was provided by "Chargaff's rules." These "rules," described by Erwin Chargaff in 1947, have to do with the relation between the amounts of the four "bases," C, A, T, and G, that help make up DNA. Chargaff found that in the DNA samples he analyzed, the amounts of C and G were always roughly the same, and the amounts of T and A were always roughly the same. This fact about DNA became important in the discussions of how DNA molecules are put together. I called it a "fact" just above, but of course Chargaff in 1947 had not observed all the molecules of DNA that exist, and neither have we. In 1947 Chargaff's claim rested on an induction from a small number of cases (in just eight different kinds of organisms). Today we can give an argument for why Chargaff's rules hold that is not just a simple induction; the structure of DNA explains why Chargaff's rules must hold. But it might appear that, back when the rules were originally discovered, the only reason to take the rules to describe all DNA was inductive.

  So it might be a good idea to refuse to treat one of these kinds of inference as "more fundamental" than the other. Maybe there is more than one kind of good nondeductive inference (and perhaps there are others besides the ones I have mentioned). Philosophers often find it attractive to think that there is ultimately just one kind of nondeductive inference, because that seems to be a simpler situation. But the argument from simplicity is unconvincing.

  Let us return to our discussion of how the problem was handled by the logical empiricists. They used two main approaches. One was to formulate an inductive logic that looked as much as possible like deductive logic, borrowing ideas from deductive logic whenever possible. That was Carl Hempel's approach. The other approach, used by Rudolf Carnap, was to apply the mathematical theory of probability. In the next two sections of this chapter, I will discuss some famous problems for logical empiricist theories of confirmation. The problems are especially easy to discuss in the context of Hempel's approach, which was simpler than Carnap's. A detailed examination of Carnap is beyond the scope of this book. Through his career, Carnap developed very sophisticated models of confirmation using probability theory applied to artificial languages. Problems kept arising. More and more special assumptions were needed to make the results come out right. There was never a knockdown argument against him, but the project came to seem less and less relevant to real science, and it eventually ran out of steam (Howson and Urbach 1993).

  Although Carnap's approach to analyzing confirmation did not work out, the idea of using probability theory to understand confirmation remains popular and has been developed in new ways. Certainly this looks like a good approach; it does seem that observing the raised iridium level in the earth's crust made the Alvarez meteor hypothesis more probable than before. In chapter 14 I will describe new ways to use probability theory to understand the confirmation of theories.
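  To anticipate the simplest version of that idea (a schematic preview only; chapter 14 develops the probabilistic approach properly), evidence E is said to confirm hypothesis H when observing E raises the probability of H, with the updated probability given by Bayes' theorem:

\[
P(H\mid E) > P(H), \qquad P(H\mid E) = \frac{P(E\mid H)\,P(H)}{P(E)}.
\]

  On this reading, the iridium layer confirms the meteor hypothesis because such a layer is far more to be expected if a huge meteor struck than if one did not.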

  Before moving on to some famous puzzles, I will discuss a simple proposal that may have occurred to you.

  The term hypothetico-deductivism is used in several ways by people writing about science. Sometimes it is used to describe a simple view about testing and confirmation. According to this view, hypotheses in science are confirmed when their logical consequences turn out to be true. This idea covers a variety of cases; the confirmation of a white-swan generalization by observing white swans is one case, and another is the confirmation of a hypothesis about an asteroid impact by observations of the true consequences of this hypothesis.

  As Clark Glymour has emphasized (1980), an interesting thing about this idea is that it is hopeless when expressed in a simple way, but something like it seems to fit well with many episodes in the history of science. One problem is that a scientific hypothesis will only have consequences of a testable kind when it is combined with other assumptions, as we have seen. But put that problem aside for a moment. The suggestion above is that a theory is confirmed when a true statement about observables can be derived from it. This claim is vulnerable to many objections. For example, any theory T deductively implies T-or-S, where S is any sentence at all. But T-or-S can be conclusively established by observing the truth of S. Suppose S is observational. Then we can establish T-or-S by observation, and that confirms T. This is obviously absurd. Similarly, if theory T implies observation E, then the theory T&S implies E as well. So T&S is confirmed by E, and S here could be anything at all. (Note the similarity here to a problem discussed at the beginning of section 2.4.) There are many more cases like this.
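  For readers who want to see these two logical facts checked rather than just asserted, here is a minimal sketch in Python (the brute-force entails helper and the variable names are mine, purely for illustration). It verifies that T entails T-or-S, that S alone establishes T-or-S, and that tacking an irrelevant S onto a theory that implies E still yields E:

from itertools import product

def entails(premise, conclusion):
    """Brute-force semantic entailment over all truth assignments to (t, s, e)."""
    return all(conclusion(t, s, e)
               for t, s, e in product([True, False], repeat=3)
               if premise(t, s, e))

# Any theory T entails the disjunction T-or-S ...
print(entails(lambda t, s, e: t, lambda t, s, e: t or s))              # True
# ... and S alone is enough to establish T-or-S.
print(entails(lambda t, s, e: s, lambda t, s, e: t or s))              # True

# The "tacking" problem: given that T implies E, the conjunction T-and-S
# implies E as well, for any S whatsoever.
print(entails(lambda t, s, e: t and s and (not t or e),
              lambda t, s, e: e))                                      # True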

  The situation is strange, and some readers may feel exasperation at this point. People do often regard a scientific hypothesis as supported when its consequences turn out to be true; this is taken to be a routine and reasonable part of science. But when we try to summarize this idea using simple logic, it seems to fall apart. Does the fault lie with the original idea, with our summary of the idea using basic logic, or with basic logic itself? The logical empiricist response was to hang steadfastly onto the logic, and often to hang onto their translations of ideas about science into a logical framework as well. This led them to question or modify some very reasonable-looking ideas about evidence and testing. But it is hard to work out where the fault really lies.

  A related feature of logical empiricism is the use of simplified and artificial cases rather than cases from real science. The logical empiricists sought to strip the problem of confirmation down to its bare essentials, and they saw these essentials in formal logic. But to many, philosophy of science seemed to be turning into an exercise in "logic-chopping" for its own sake. And as we will see in the next sections, even the logic-chopping did not go well.

  Despite this, there is a lot to learn from the problems faced by logical empiricism. Confirmation really is a puzzling thing. Let us look at some famous puzzles.

  3.3 The Ravens Problem

  The logical empiricists put much work into analyzing the confirmation of generalizations by observations of their instances. At this point we will switch birds, in accordance with tradition. How is it that repeated observations of black ravens can confirm the generalization that all ravens are black?

  First I will deal with a simple suggestion that will not work. Some readers might be thinking that if we observe a large number of black ravens and no nonblack ones, then at least we are cutting down the number of ways in which the hypothesis that all ravens are black might be wrong. As we see each raven, there is one less raven that might fail to fit the theory. So in some sense, the chance that the hypothesis is true should be slowly increasing. But this does not help much. First, the logical empiricists were concerned to deal with the case where generalizations cover an infinite number of instances. In that case, as we see each raven we are not reducing the number of ways in which the hypothesis might fail. Also, note that even if we forget this problem and consider a generalization covering just a finite number of cases, the kind of support that is analyzed here is a very weak one. That is clear from the fact that we get no help with the problem of projection. As we see each raven we know there is one less way for the generalization to be false, but this does not tell us anything about what to expect with the next raven we see.

  So let us look at the problem differently. Hempel suggested that, as a matter of logic, all observations of black ravens confirm the generalization that all ravens are black. More generally, any observation of an F that is also G supports the generalization "All F's are G." He saw this as a basic fact about the logic of support.

  This looks like a reasonable place to start. And here is another obvious-looking point: any evidence that confirms a hypothesis H also confirms any hypothesis that is logically equivalent to H.

  What is logical equivalence? Think of it as what we have when two sentences say the same thing in different terms. More precisely, if H is logically equivalent to H*, then it is impossible for H to be true but H* false, or vice versa.

  But these two innocent-looking claims generate a problem. In basic logic the hypothesis "All ravens are black" is logically equivalent to "All nonblack things are not ravens." Let us look at this new generalization. "All nonblack things are not ravens" seems to be confirmed by the observation of a white shoe. The shoe is not black, and it's not a raven, so it fits the hypothesis. But given the logical equivalence of the two hypotheses, anything that confirms one confirms the other. So the observation of a white shoe confirms the hypothesis that all ravens are black! That sounds ridiculous. As Nelson Goodman (1955) put it, we seem to have the chance to do a lot of "indoor ornithology"; we can investigate the color of ravens without ever going outside to look at one.
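  The logical equivalence, and the shoe's status as a positive instance of the second form, can be checked mechanically. Here is a minimal sketch in Python (the tiny three-object "worlds" and the helper functions are just an illustrative toy):

from itertools import product

# Each object is a pair (is_raven, is_black).
def all_ravens_black(world):
    return all(black for raven, black in world if raven)

def all_nonblack_things_nonravens(world):
    return all(not raven for raven, black in world if not black)

# Over every possible world built from three such objects, the two
# generalizations never come apart:
kinds = list(product([True, False], repeat=2))
print(all(all_ravens_black(w) == all_nonblack_things_nonravens(w)
          for w in product(kinds, repeat=3)))                  # True

# A white shoe is not black and not a raven, so it fits
# "All nonblack things are not ravens" as a positive instance:
is_raven, is_black = False, False
print((not is_black) and (not is_raven))                       # True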

  This simple-looking problem is hard to solve. Debate about it continues. Hempel himself was well aware of this problem; he is the one who originally thought of it. But no proposed solution has won acceptance from everyone, or even from most people.

  One possible reaction is to accept the conclusion. This was Hempel's response. Observing a white shoe does confirm the hypothesis that all ravens are black, though presumably only by a tiny amount. Then we can keep our simple rule that whenever we have an "All F's are G" hypothesis, any observation of an F that is G confirms it and also confirms everything logically equivalent to "All F's are G." Hempel stressed that, logically speaking, an "All F's are G" statement is not a statement about F's but a statement about everything in the universe: the statement that if something is an F then it is G. We should note that according to this reply, the observation of the white shoe also confirms the hypothesis that all ravens are green, that all aardvarks are blue, and so on. Hempel was comfortable with this situation, but most others have not been.

  A multitude of other solutions have been proposed. I will discuss just two ideas, which I regard as being on the right track.

  Here is the first idea. Observing a white shoe or a black raven may or may not confirm "All ravens are black"; it depends on other factors. Suppose we know, for some reason, that either (1) all ravens are black and ravens are extremely rare, or else (2) most ravens are black, a few are white, and ravens are common. Then a casual observation of a black raven will support (2), a hypothesis that says that not all ravens are black. If all ravens were black, we should not be seeing them at all. Observing a white shoe, similarly, may or may not confirm a given hypothesis, depending on what else we know. This reply was first suggested by I. J. Good (1967).
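  Good's point can be made vivid with a toy calculation (the numbers are invented for illustration; only the shape of the two hypotheses comes from his example). Suppose a casual observation picks one of a million objects at random. Under hypothesis 1 there are only ten ravens, all of them black; under hypothesis 2 there are ten thousand ravens, nine-tenths of them black. Seeing a black raven then tells strongly against hypothesis 1:

N = 1_000_000                                    # objects we might casually run into
p_obs_given_h1 = 10 / N                          # all ravens black, but ravens very rare
p_obs_given_h2 = (10_000 * 0.9) / N              # ravens common, 90 percent of them black

prior_h1 = prior_h2 = 0.5                        # treat the hypotheses as equally likely
posterior_h1 = (p_obs_given_h1 * prior_h1) / (
    p_obs_given_h1 * prior_h1 + p_obs_given_h2 * prior_h2)

print(round(posterior_h1, 4))                    # about 0.0011: the black raven counts
                                                 # heavily against "all ravens are black"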

  Good's move is very reasonable. We see here a connection to the issue of holism about testing, discussed in chapter 2. The relevance of an observation to a hypothesis is not a simple matter of the content of the two statements; it depends on other assumptions as well. This is so even in the simple case of a hypothesis like "All F's are G" and an observation like "Object A is both F and G." Good's point also reminds us how artificially simplified the standard logical empiricist examples are. No biologist would seriously wonder whether seeing thousands of black ravens makes it likely that all ravens are black. Our knowledge of genetics and bird coloration leads us to expect some variation, such as cases of albinism, even when we have seen thousands of black ravens and no other colors.

 
