by Hasok Chang
In fact, this metaphor of building on a round earth has great potential to help us make coherentism more sophisticated. It is more useful than Quine's membrane or even Neurath's boat (both mentioned in "Mutual Grounding as a Growth Strategy" in chapter 3), because those established metaphors of coherentism do not convey any sense of hierarchical structure. Although our round earth is not an
immovable foundation, the gravitational physics of the earth still gives us a sense of direction—"up" and "down" at a given location, and "inward" and "outward" overall. That direction is a powerful constraint on our building activities, and the constraint can be a very useful one as well (consider all the difficulties of building a space station, where that constraint is absent). This constraint provides a clear overall direction of progress, which is to build up (or outward) on the basis of what is already put down (or within). There is a real sense in which elements of the inner layers support the elements of the outer layers, and not vice versa.2 Round-earth coherentism can incorporate the most valid aspects of foundationalism. It allows us to make perfect sense of hierarchical justification, without insisting that such justification should end in an unshakeable foundation or fearing that it is doomed to an infinite regress.3
Making Coherentism Progressive: Epistemic Iteration
So far I have argued that the quest for justification is bound to lead to coherentism. But the real potential of coherentism can be seen only when we take it as a philosophy of progress, rather than justification. Of course there are inherent links between progress and justification, so what I am advocating is only a change of emphasis or viewpoint, but it will have some real implications. This reorientation of coherentism was already hinted at in the contrast between the two treatments of fixed points given in "The Validation of Standards" and "The Iterative Improvement of Standards" in chapter 1. The question I would like to concentrate on is how to go on in the development of scientific knowledge, not how to justify what we already have.
In the framework of coherentism, inquiry must proceed on the basis of an affirmation of some existing system of knowledge. That point has been emphasized by a wide variety of major philosophers, including Wittgenstein (1969), Husserl (1970), Polanyi (1958), and Kuhn (1970c). (As Popper conjectured, the historical beginning of this process was probably inborn expectations; the question of ultimate origin is not very important for my current purposes.) Starting from an existing system of knowledge means building on the achievements of some actual past group of intelligent beings. As Lawrence Sklar (1975, 398-400) suggested tentatively (in a curiously foundationalist metaphor), a "principle of conservatism" may
2. But it is possible occasionally to drill underneath to alter the structure below, as long as we do not do it in such a way as to make the platform above collapse altogether. See Hempel 1966, 96.
3. One interesting question that remains is how often scientific inquiry might be able to get away with pretending that certain assumptions are indubitable truths. Recall Duhem's view (discussed in "Measurement, Circularity, and Coherentism") that the physiologist does not need to worry about the correctness of the principles of physics that underwrite the correct functioning of his measuring instruments. Although foundationalism may not work as a general epistemology, there are scientific situations that are in effect foundationalist. To return to the metaphor of buildings, it is quite true that most ordinary building work is done as if the earth were flat and firmly fixed. It is important to discern the types of situations in which the breakdown of foundationalism does or does not affect scientific research in a significant way.
be "a foundation stone upon which all justification is built." This gives knowledge an indelibly historical character. The following analogy, used by Harold Sharlin (1979, 1) to frame his discussion of William Thomson's work, has general significance:

The father-son relationship has an element that is analogous to the historical basis of scientific research. The son has his father to contend with, and he rejects him at his peril. The scientific tradition may obstruct modern science, but to deny that tradition entirely is to undermine the basis for scientific investigations. For the son and a new generation of scientists, there are two courses open: submit to the past and be a duplicate hemmed in by the lessons of someone else's experience, or escape. Those who seek to escape the past without doing violence to the historical relationship between the present and the past are able to maintain their independence and make original contributions.
I summarized a similar insight in the "principle of respect" in "The Validation of Standards" in chapter 1. It is stronger than what William G. Lycan (1988, 165-167, 175-176) calls the "principle of credulity," which only says that a belief one holds initially should not be rejected without a reason, but that it should be rejected whenever there is a reason, however insignificant. The principle of respect does not let the innovator off so easily. Those who respect the affirmed system may have quite strong reasons for rejecting it, but will continue to work with it because they recognize that it embodies considerable achievement that may be very difficult to match if one starts from another basis.
The initial affirmation of an existing system of knowledge may be made uncritically, but it can also be made while entertaining a reasonable suspicion that the affirmed system of knowledge is imperfect. The affirmation of a known system is the only option when there is no alternative that is clearly superior. A simple example illustrates this point. Fahrenheit made some important early experimental contributions to the study of specific heats, by mixing measured-out amounts of fluids at different initial temperatures and observing the temperature of the resulting mixture. In these experiments he was clearly aware of an important source of error: the initial temperature of the mixing vessel (and the thermometer itself) would have an effect on the outcome. The only way to eliminate this source of error was to make sure that the mixing vessel started out at the temperature of the resulting mixture, but that temperature was just what the experiment was trying to find out. The solution adopted by Fahrenheit was at once pragmatic and profound. In a letter of 12 December 1718 to Boerhaave, he wrote:

(1) I used wide vessels which were made of the thinnest glass I could get. (2) I saw to it that these vessels were heated to approximately the same temperature as that which the liquids assumed when they were poured into them. (3) I had learned this approximate temperature from some tests performed in advance, and found that, if the vessel were not so approximately heated, it communicated some of its own temperature (warmer or colder) to the mixture. (van der Star 1983, 80-81)
I have not been able to find a record of the exact procedure of approximation that Fahrenheit used. However, the following reconstruction would be a possibility, and would be quite usable independently of whether Fahrenheit used it himself. Start
with the vessel at the halfway temperature between the initial temperatures of the hot and the cold liquids. Measure the temperature of the mixture in that experiment, and then set the vessel at that temperature for the next experiment, whose outcome will be slightly different from the first. This procedure could be repeated as many times as desired, to reduce the error arising from the initial vessel temperature as much as we want. In the end the initial vessel temperature we set will be nearly identical to the temperature of the mixture. In this series of experiments, we knowingly start with an ill-founded guess for the outcome, but that guess serves as a starting point from which a very accurate result can be reached.
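In modern terms, this reconstructed procedure is a fixed-point iteration, and its rapid convergence is easy to see in a simulation. The sketch below is purely illustrative: it assumes unequal masses of the same liquid, a vessel of small relative heat capacity, and no heat loss, and none of the numbers or parameter names are Fahrenheit's own.

```python
def mixture_temp(t_hot, t_cold, t_vessel, m_hot=1.0, m_cold=2.0, c_vessel=0.1):
    """Equilibrium temperature when m_hot of hot liquid and m_cold of cold
    liquid (equal specific heats) are mixed in a vessel whose heat capacity
    is c_vessel relative to a unit mass of the liquid; no heat loss."""
    total = m_hot + m_cold + c_vessel
    return (m_hot * t_hot + m_cold * t_cold + c_vessel * t_vessel) / total

t_hot, t_cold = 100.0, 0.0

# Knowingly ill-founded starting guess: the halfway temperature.
t_vessel = (t_hot + t_cold) / 2

for run in range(6):
    t_mix = mixture_temp(t_hot, t_cold, t_vessel)
    print(f"run {run}: vessel preheated to {t_vessel:.5f}, mixture reads {t_mix:.5f}")
    # Use this run's outcome to preheat the vessel for the next run.
    t_vessel = t_mix
```

Each repetition shrinks the error in the vessel's preheating temperature by a factor of roughly c_vessel / (m_hot + m_cold + c_vessel), so a handful of runs brings it as close to the true mixture temperature as desired, even though the first guess was wrong by construction.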
This is an instance of what I have named "epistemic iteration" in "The Iterative Improvement of Standards" in chapter 1, which I characterized as follows: "Epistemic iteration is a process in which successive stages of knowledge, each building on the preceding one, are created in order to enhance the achievement of certain epistemic goals. … In each step, the later stage is based on the earlier stage, but cannot be deduced from it in any straightforward sense. Each link is based on the principle of respect and the imperative
of progress, and the whole chain exhibits innovative progress within a continuous tradition." Iteration provides a key to understanding how knowledge can improve without the aid of an indubitable foundation. What we have is a process in which we throw very imperfect ingredients together and manufacture something just a bit less imperfect. Various scientists and philosophers have noted the wonderful, almost-too-good-to-be-true nature of this process, and tried to understand how it works. I have already mentioned Peirce's idea about the self-correcting character of knowledge in "The Iterative Improvement of Standards" in chapter 1. George Smith (2002, 46) argues convincingly that an iterative engagement with empirical complexities was what made Newton's system superior to his competitors': "In contrast to the rational mechanics of Galileo and Huygens, the science coming out of the Principia tries to come to grips with actual motions in all their complexity—not through a single exact solution, however, but through a sequence of successive approximations." Among contemporary philosophers, it is perhaps Deborah Mayo (1996) who has made the most extensive efforts to explain the nature of self-correction and "ampliative inference."4
Of course, there is no guarantee that the method of epistemic iteration will always succeed. A danger inherent in the iterative process is the risk of self-destruction (cf. Smith 2002, 52). Since the initially affirmed system is subject to modification, there is a possibility that the validity of the inquiry itself will be jeopardized. How can it be justifiable to change elements of the initially affirmed system, which is the very basis of our attempt at making progress? It is not that there is any problem about changing one's mind in science. The concern is that the whole process might become a morass of self-contradiction. What we need to ensure is that the changes in the initially affirmed system do not invalidate the very outcomes that prompted the changes. Whether that is possible is a contingent empirical
4. For a quick introduction to Mayo's ideas, see my review of her major work (Chang 1997).
question for each case. If all attempted iterative improvements to a system result in self-contradiction, that may be taken as a failure of the system itself. Such repeated self-destruction is as close as we can get to an empirical falsification of the initially affirmed system. In earlier chapters we have had various glimpses of such rejection of initially affirmed beliefs: for instance, the essential fluidity of mercury ("Can Mercury be Frozen?" and "Can Mercury Tell Us Its Own Freezing Point?" in chapter 3), Irvine's doctrine of heat capacity ("Theoretical Temperature before Thermodynamics" in chapter 4), and the linearity of the expansion of alcohol (end of "Regnault: Austerity and Comparability" in chapter 2). If a system of knowledge is judged to be unable to support any progressive inquiry, that is a damning verdict against it.
When iteration is successful, how do we judge the degree of progress achieved by it? Here one might feel a hankering back to foundationalism, in which self-justifying propositions can serve as sure arbiters of truth; then we may have a clear sense in which scientific progress can be evaluated, according to how closely we have approached the truth (or at least how well we have avoided falsity). Without an indubitable foundation, how will we be able to judge whether we have got any closer to the truth? What we need to do here is look away from truth. As even the strictest foundationalists would admit, there are a variety of criteria that we can and should use in judging the merits of systems of knowledge. These criteria are less than absolute, and their application is historically contingent to a degree, but they have considerable force in directing our judgments.
In Carl Hempel's (1966, 33-46) discussion of the criteria of empirical confirmation and the acceptability of hypotheses in the framework of hypothetico-deductivism, many aspects that go beyond simple agreement between theory and observation are recognized as important factors. First of all Hempel stresses that the quality of theory-observation agreement has to be judged on three different criteria: the quantity, variety, and precision of evidence. In addition, he gives the following as criteria of plausibility: simplicity, support by more general theories, ability to predict previously unknown phenomena, and credibility relative to background knowledge. Thomas Kuhn (1977, 322) lists accuracy, consistency, scope, simplicity, and fruitfulness as the "values" or "standard criteria for evaluating the adequacy of a theory," which allow the comparative judgment between competing paradigms despite incommensurability. Bas van Fraassen (1980, 87) mentions elegance, simplicity, completeness, unifying power, and explanatory power only to downgrade these desirables as mere "pragmatic virtues," but even he would not suggest that they are without value, and others argue that these pragmatic virtues can be justificatory. William Lycan (1988; 1998, 341) gives the following as examples of epistemic or "theoretical" virtues: simplicity, testability, fertility, neatness, conservativeness, and generality (or explanatory power). I will refer to all of these various criteria of judgment as "epistemic values" or "epistemic virtues," using the terms somewhat interchangeably. (The same list of things will be referred to as epistemic values when they are taken as criteria by which we make judgments on systems of knowledge, and as epistemic virtues when they are taken as good qualities possessed by the systems of knowledge.)
Whether and to what extent an iterative procedure has resulted in progress can be judged by seeing whether the system of knowledge has improved in any of its
epistemic virtues. Here I am defining progress in a pluralistic way: the enhancement of any feature that is generally recognized as an epistemic virtue constitutes progress. This notion of progress might be regarded as inadequate and overly permissive, but I think it is actually specific enough to serve most purposes in philosophy of science, or in science itself. I will quickly dispense with a few major worries. (1) A common normative discourse will not be possible if people do not agree on the list of recognized epistemic values, but there is actually a surprising degree of consensus about the desirability of those epistemic values mentioned earlier. (2) Even with an agreement on the values there will be situations in which no unequivocal judgment of progress can be reached, in which some epistemic values are enhanced and others are diminished. However, in such situations there is no reason why we should expect to have an unequivocal verdict. Some have attempted to single out one epistemic virtue (or one set of epistemic virtues) as the most important, overriding all others (for example, van Fraassen's empirical adequacy, Kuhn's problem-solving ability, and Lakatos's novel predictions), but no consensus has been reached on such an unambiguous hierarchy of virtues.5 (3) One might argue that the only virtue we can and should regard as supreme over all others is truth, and that I have gone astray by setting truth aside in the first place when I entered into the discussion of epistemic virtues. I follow Lycan (1988, 154-156) in insisting that the epistemic virtues are valuable in their own right, regardless of whether they lead us to the truth. Even if truth is the ultimate aim of scientific activity, it cannot serve as a usable criterion of judgment. If scientific progress is something we actually want to be able to assess, it cannot mean closer approach to the truth.
None of the various attempts to find usable watered-down truthlike notions (approximate truth, verisimilitude, etc.) have been able to command a consensus.6
Fruits of Iteration: Enrichment and Self-Correction
Let us now take a closer look at the character of scientific progress that can be achieved by the method of epistemic iteration. There are two modes of progress enabled by iteration: enrichment, in which the initially affirmed system is not negated but refined, resulting in the enhancement of some of its epistemic virtues; and self-correction, in which the initially affirmed system is actually altered in its content as a result of inquiry based on itself. Enrichment and self-correction often occur simultaneously in one iterative process, but it is useful to consider them separately to begin with.
5. See Kuhn 1970c, 169-170 and 205; Van Fraassen 1980, 12 and ch. 3; Lakatos 1968-69 and Lakatos 1970, or Lakatos [1973] 1977 for a very quick introduction.
6. Psillos (1999, ch. 11) gives a useful summary and critique of the major attempts in this direction, including those by Popper, Oddie, Niiniluoto, Aronson, Harré and Way, and Giere. See also Psillos's own notion of "truth-likeness" (pp. 276-279), and Boyd 1990, on the notion of approximate truth and its difficulties.
Enrichment