The World Philosophy Made

by Scott Soames


  Contra Kripke, we don’t identify heat by first perceiving a sensation S, and then taking heat to be the know-not-what that causes S. The sensation is our perception of heat, just as a visual experience of my dog Lilly is a perception of her. Lilly does cause my visual experience, but when I identify her I do so directly, not by making my visual experience the object of my attention, and defining her as its cause. If I ask myself, “To what do I use ‘Lilly’ to refer?” I look at her and say, “To her.” If I ask myself, “To what do I use ‘heat’ to refer?,” I move close to the fire, or the stove, and say, “To that.” Since there is no “reference-fixing description,” I don’t take either term to be synonymous with a description. Nor do I confuse conceivable scenarios involving Lilly, or heat, with scenarios in which other things cause my experiences.

  When I say I can conceive of heat not being molecular motion, or of Lilly not being an animal, I am not misdescribing some other possibility that I am really conceiving. I am not really thinking of sensation S being caused by something other than heat, or of my Lilly-perceptions being caused by a mechanical robot. I am simply thinking of heat, or Lilly, as lacking an essential property P. Because P is essential, the claim that x has P, if x exists, is necessary. Because I can’t know, without empirical evidence, that x has P, knowledge of the necessary truth requires such evidence to rule out conceivable (but not genuinely possible) disconfirming scenarios that can’t be eliminated in any other way.

  The same is true of self-predications. Let P be a property I couldn’t have existed without having—e.g., having a body made up of molecules, or being a human being—but which I can’t know I do have without empirical evidence. My remark “If I exist, then I have P” will then express a necessary truth. Although this truth might wrongly seem contingent, the reason it does isn’t that I wrongly take my use of ‘I’ to be synonymous with a reference-fixing description. There is no such description. When I use the pronoun, I don’t identify myself as the creature, whoever it might be, designated by a privileged description. Thus, when I say that I am conceiving a scenario in which I lack P, I am not confusing myself with someone else, Mistaken-Me, who, in fact, is designated by my reference-fixing description—thereby misdescribing a different possibility in which he lacks P.

  The lesson is the same in all genuine Kripke-cases—e.g., heat and mean molecular kinetic energy, Lilly and being an animal, and me and being human. In each case, the mistake of wrongly taking a proposition to be contingent when, in fact, it must be necessary, if true, is due to the fact that establishing its truth requires empirical evidence ruling out scenarios in which it is false. It was surprising that these empirical discoveries turned out to be necessary truths. The deeper surprise was that the reason empirical evidence is needed to establish them isn’t to rule out disconfirming possibilities; it is to rule out disconfirming impossibilities we can’t know not to be actual by reason alone.

  This is the core insight behind Kripke’s groundbreaking distinction between genuine possibility and mere conceivability. Thus, if the psychophysical identity statement (7) were really a necessary truth, on a par with (8a), the alleged illusion that it is contingent, if true, could be explained in the same way that the genuine illusion that (8a) is contingent, if true, is explained. This is enough to undermine Kripke’s argument against the psychophysical identity theory. However, we are not finished. There is still an important question to be resolved. Must (7) really be necessary, if it is true?

  THE ROLE OF NECESSITY AND POSSIBILITY

  Up to now we have been reasoning as if (7) must be necessary, if true. But in so doing we have been ignoring the idea that our concept of pain is that of something that plays a certain functional role—of (i) perceiving certain kinds of bodily injury, (ii) triggering changes in an organism’s current motivational structure that lead to actions intended to end or minimize the current injury, and (iii) forming or reinforcing desires and intentions to avoid similar injuries in the future. To see why this kind of functional characterization of pain makes a difference, imagine that scientific investigation had given us grounds for believing that every pain of each and every contemporary human being x is a stimulation of x’s C-fibers, and every such stimulation is one of x’s pains. We would then have reason to believe that (9) is true.

  9.  For all y, y is a human pain if and only if y is a stimulation of C-fibers.

  But we wouldn’t, thereby, have reason to believe that (9) was necessary. Perhaps earlier in our evolutionary history human pains were stimulations of a more primitive kind of neurological material, which then played the pain role that the stimulation of C-fibers plays today. If that were so, then (9) would be, at best, a contingent truth. The same conclusion would follow if there were reason to believe that, under future evolutionary pressure, D-fibers will replace C-fibers. In fact, all that is needed to refute the claim that (9) must be necessary, if true, is the mere possibility that human evolution could have gone, or will go, differently enough to bring it about that something other than C-fiber stimulation plays the pain role in human beings.

  In short, facts of the sort we have been considering about what is, and what is not, necessary, do not provide conclusive arguments against the claim that certain “mental” phenomena—pains in human beings—are identical with, and hence nothing more than, neurological events that play certain functional roles. A slight variation on these considerations would allow the continued existence of human C-fibers, even though stimulation of them wouldn’t play the pain role, because, at the relevant possible state, w, of the world, C-fibers interact with other neural systems in humans that are not present in human brains in the actual state of the world here and now. Hence, for all we know, there may be genuinely possible states w at which particular C-fiber stimulations that are pains, because they now play the pain role, exist without being pains at w—in which case, being a human pain is not, contrary to Kripke, an essential property of any particular human pain. These “possibilities” are, of course, speculative. But since nothing in Naming and Necessity tells against them, it is fair to conclude that Kripke’s objections to the version of the mind-body identity thesis expressed by (9) don’t succeed.

  WHERE WE STAND

  The discussion here has been favorable to the idea that not only pains, but also mental states and processes generally, might be physical states and processes that play functional roles in the lives of sentient agents. That had better be so, if the defense of the psychophysical story of pain in human beings sketched earlier is on the right track. However, defeating the most powerful contemporary philosophical objections to identifying human pains with neurological events doesn’t establish the correctness of that identification. Having made free use of other mental concepts—perception, belief, desire, intention, and motivation—our story is hostage to the soundness of similar stories of these other mental states. Are C-fiber stimulations really perceptions, or are they merely physical events that accompany genuinely mental perceptions? It is worth noting that we apply these concepts—perception, belief, desire, and the like—only to conscious living things. I can’t, for example, imagine any advanced robot constructed along the same lines as those produced by current technology doing more than simulating mental phenomena. Why? What is missing? Until we have convincing answers to these questions, we shouldn’t be too quick to assert that the answers must be purely physicalist. Although we may have made some progress, we still haven’t gotten to the bottom of the mind-body problem.

  COMPUTATION, COGNITIVE PSYCHOLOGY, AND THE REPRESENTATIONAL THEORY OF MIND

  The foregoing discussion tracks the transformation of traditional approaches to the mind-body problem into the scientific framework of post-behaviorist psychology. By the early 1960s, academic psychology was emerging from an earlier era dominated by the search for laws connecting environmental stimulus with behavioral response. Since events in the mind or brain had been taken to be beyond the range of scientific observation, postulation of internal causes and effects had been deemed unscientific, and learning had often been understood to be the acquisition of conditioned behavior shaped by rewards. In the emerging new paradigm, minds were coming to be seen as biologically based computers, understood on the model of Turing machines, with functional architectures of connected subsystems processing information, passing their outputs to the next subsystem, and generating a sequence of internal causes and effects mediating sensory input and behavioral output.

  In the new framework, the mind is taken to perform computational operations on internal elements—e.g., maps, sketches, or sequences of other symbols—representing objects in the world, their properties, and possible states of affairs relating them to one another. Different cognitive systems—e.g., perception, memory, conscious reasoning—are, to varying degrees, able to interact, contributing informational output to, and receiving input from, other systems. Crucially, the computations connecting informational inputs and outputs don’t require the unexplained intelligence of an internal interpreter. Rather, they explain the intelligence of the agent, whose cognitive architecture is a coordinated system of information-processing and action-generating subsystems. Just as Turing machines can—by performing a sequence of tiny tasks no one of which requires intelligence—solve all problems that can be solved using any intelligent method, so the intelligent mind is thought to be able to do the same sort of thing. Thus, it is hoped, intelligence will be explained, rather than presupposed.
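
  A toy sketch, not from the text: the kind of Turing-machine computation the paragraph describes, in which a problem is solved by a sequence of tiny rewriting steps, none of which individually requires intelligence. The function and rule names below are invented for the illustration; the machine adds 1 to a binary numeral.

```python
# A toy Turing machine (illustrative only): each rule is a trivial, "unintelligent"
# rewrite, yet the sequence of rules computes the addition of 1 in binary.

def run_turing_machine(tape, rules, state="right"):
    cells = ["_"] + list(tape) + ["_"]   # '_' marks blank cells on either side
    head = 1                             # start on the leftmost digit
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write              # one tiny step: write a symbol...
        head += move                     # ...and move the head one cell
    return "".join(cells).strip("_")

# (current state, symbol read) -> (symbol to write, head movement, next state)
increment_rules = {
    ("right", "0"): ("0", +1, "right"),  # scan right to the end of the numeral
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),  # at the blank, turn around and carry
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus a carry is 0, carry again
    ("carry", "0"): ("1",  0, "halt"),   # absorb the carry and stop
    ("carry", "_"): ("1",  0, "halt"),   # carry past the leftmost digit
}

print(run_turing_machine("1011", increment_rules))   # '1100'  (11 + 1 = 12)
```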

  At this point, three objections are likely to be heard. First, while it is all well and good to seek internal explanations of human thought and action, surely, one may assume, the internal causes we seek must be neurological brain processes; there is no cognitive science without neuroscience. To which we respond, “Yes and No.” Of course, neurological processes play indispensable causal roles in enabling and shaping the actions we perform, and the thoughts we have. But the actions and thoughts for which we typically seek psychological explanations are those that relate us to realities outside our own skins. How do we acquire, store, and retrieve information about our environment, other people, other places, and other things? How do we learn to recognize people and things that are important to us, to understand and sometimes influence events that affect us, and to shape our actions to achieve our ends? Since the essential function of mind is to relate the individual to the world, the most fundamental question for cognitive psychology is: How does the mind manage to represent the world and to devise ways of changing or adapting to it? Although neuroscience has a role in answering this question, it is a supplement to, rather than a replacement for, cognitive psychology.

  The second natural objection to the search for internal computational/representational explanations of human cognitive abilities is that it is not obvious where, or how, consciousness fits into the picture. To this it must be conceded that consciousness does remain a mystery. Nevertheless, we shouldn’t prejudge how much of our cognitive capacity is due to subconscious representation and computation, and how much isn’t. So far the answer, from studies of, e.g., the perception of objects, faces, and speech, as well as the use and acquisition of language, seems to be “Quite a lot.”

  The third worry is that there is no way to observe or identify unconscious cognitive processes, and so no way to study them scientifically. But that’s not quite right. To take one easily imagined example, we know from the study of proof procedures in logic that it is possible to construct computationally very different procedures all of which recognize the same class of logical consequences. There are systems that economize on axioms, those that economize on rules of inference, those that economize on both, and those that economize on neither. There are even systems that simulate the generation of models that would make the premises true and the purported conclusion false; when that turns out to be impossible—because the conclusion does logically follow from the premises—they terminate and tell us so. The fact that these systems are computationally very different means that, although they all eventually draw the same conclusions, some proofs that are short, quick, and easy in some systems are long, time-consuming, and laborious in others.
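
  To make the point concrete, here is a small sketch, not drawn from the text, of two such procedures for propositional logic. Both recognize exactly the same consequences; the first always enumerates the whole truth table, while the second, like the model-generating systems just mentioned, stops as soon as it finds an assignment making the premises true and the conclusion false. The function names and formula encoding are invented for the example.

```python
# Two computationally different decision procedures that agree on what follows from what.
from itertools import product

def entails_exhaustive(premises, conclusion, atoms):
    """Truth-table method: examine every assignment, with no early exit."""
    valid = True
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            valid = False                      # countermodel found, but keep going
    return valid

def entails_by_countermodel_search(premises, conclusion, atoms):
    """Model-generation method: stop the moment a countermodel turns up."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False                       # premises true, conclusion false
    return True                                # no countermodel: the conclusion follows

# From (P -> Q) and P, does Q follow?  Both procedures answer yes, but on larger
# problems their running times can come sharply apart.
atoms = ["P", "Q"]
premises = [lambda v: (not v["P"]) or v["Q"],  # P -> Q
            lambda v: v["P"]]                  # P
conclusion = lambda v: v["Q"]                  # Q
print(entails_exhaustive(premises, conclusion, atoms))              # True
print(entails_by_countermodel_search(premises, conclusion, atoms))  # True
```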

  With this in mind, suppose we are confronted with a black box programmed to use one of the systems in drawing inferences. Even if we aren’t given the program and can’t look inside the box, we can measure the time it takes to reach various conclusions, thereby supporting some hypotheses, and eliminating others, about its internal computational routines. If, in addition, we are given a little information about the programming, and the internal structure of the machine, we may even be able to identify those routines. Think of this as cognitive science for a black box.
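
  As a rough sketch of the idea, again with invented names: wrap the opaque inference device in a timer, probe it with problems of increasing size, and compare the observed latencies with the patterns that rival hypotheses about its internal routine would predict.

```python
# "Cognitive science for a black box": gathering evidence about an opaque inference
# engine from its response times alone. The black_box below merely stands in for the
# device; in a real experiment its code would be unreadable to us.
import time
from itertools import product

def black_box(premises, conclusion, atoms):
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def timed_query(premises, conclusion, atoms):
    start = time.perf_counter()
    answer = black_box(premises, conclusion, atoms)
    return answer, time.perf_counter() - start

# Probe with ever larger problems whose conclusions do NOT follow. The observed growth
# pattern of the latencies supports some hypotheses about the internal routine
# (exhaustive truth tables, early-terminating countermodel search, ...) and tells
# against others.
for n in (10, 14, 18):
    atoms = [f"A{i}" for i in range(n)]
    premises = [lambda v, i=i: v[f"A{i}"] for i in range(n - 1)]   # A0, ..., A(n-2)
    conclusion = lambda v, n=n: v[f"A{n-1}"]                       # A(n-1): unconstrained
    answer, seconds = timed_query(premises, conclusion, atoms)
    print(f"{n} atoms: follows={answer}, time={seconds:.4f}s")
```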

  In cognitive psychology the subjects aren’t opaque black boxes. Human biology, genetics, and neuroscience help guide psychologists in devising potentially informative tests in which subjects draw conclusions from information they are given. Imagine a test that asks them what statements follow logically from which others (explaining to them what we mean by this). By noting how long it takes them to decide; when they are right, when they are wrong, when they reach no conclusion; what factors interfere with their decisions; and when in their cognitive development they acquired the tested ability, psychologists can obtain evidence about the unconscious cognitive processes involved. One of the most interesting psychological theories of inference along these lines is presented by Philip Johnson-Laird in Mental Models (1983), which supplements a wide-ranging 1977 state-of-the-art collection, Thinking: Readings in Cognitive Science, edited by Johnson-Laird and P. C. Wason.

  The philosopher who has done more in the last 50 years than anyone else to initiate, conceptualize, systematize, and advance the representational conception of mind, and how to study it scientifically, is Jerry A. Fodor (1935–2017). The following three of his many books provide a good introduction to his thought: Psychological Explanation: An Introduction to the Philosophy of Psychology (1968), The Psychology of Language (coauthored with T. G. Bever and M. F. Garrett, 1974), and Representations (1981). Here I will say a word about one of the articles, “Propositional Attitudes,” that appears in Representations.

  The article focuses on the “attitudes” belief and desire, in order to illustrate a paradigm for explaining behavior in cognitive psychology. Like nearly everyone else, Fodor assumes that much of our behavior results from our beliefs and desires. As he puts it, “John believes that it will rain if he washes his car. John wants it to rain. So John acts in a way intended to be car washing.”9 Here, the action John performs is attributed to a pair of internal cognitive causes—a belief and a desire, acting in concert. Many of the explanations Fodor seeks fit this picture, and so can be expressed along the lines of (10).

  10.   X performed action A because (i) X desired it to be the case that S, (ii) X believed that performing A would bring it about that S, and (iii) X believed X could perform A.

  In order to provide interesting, scientifically grounded explanations of this sort, cognitive psychology must tell us what belief and desire are.
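
  Purely as an illustration, and not as Fodor’s own formalism, schema (10) can be written out as a simple check on an agent’s internal states; the names below (Agent, explains_action, and the tuple encoding of belief contents) are invented for the sketch.

```python
# Schema (10) as a toy data structure: an action is explained by citing a desire
# and two beliefs standing in the right relations to it.
from dataclasses import dataclass, field

@dataclass
class Agent:
    desires: set = field(default_factory=set)   # outcomes S the agent wants
    beliefs: set = field(default_factory=set)   # propositions the agent accepts

def explains_action(agent, action, outcome):
    """True when clauses (i)-(iii) of schema (10) are all satisfied."""
    return (outcome in agent.desires                                  # (i)   X desired that S
            and ("brings_about", action, outcome) in agent.beliefs    # (ii)  A would bring about S
            and ("can_perform", action) in agent.beliefs)             # (iii) X believed X could do A

john = Agent(
    desires={"rain"},
    beliefs={("brings_about", "washing the car", "rain"),
             ("can_perform", "washing the car")},
)
print(explains_action(john, "washing the car", "rain"))   # True
```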

  Like most philosophers, Fodor takes the verb ‘believe’ to stand for a relation holding between believers, like you and me, and things believed, like the proposition that the earth is round, the latter designated by the clause following ‘believes’ in the sentence ‘John believes that the earth is round’. Since this proposition represents the earth as having a shape it really does have, it is true. Although it has long been a mystery what exactly propositions are, a new conception of them as representational cognitive acts or operations (sketched in chapter 7) provides something that may be of use to Fodorian cognitive psychology.

  The best statement of his chief thesis in “Propositional Attitudes” is (roughly) that an agent A believes a proposition P (at time t) if and only if P is the content of (i.e., the proposition expressed by) a mental representation M in A’s mind (at t) and A bears a certain relation to M—e.g., internally affirming M, or being disposed to affirm M.10 Thus, the relation between the believer and the thing believed is mediated by a cognitive relation to an internal mental representation, which presents the content of the belief in the guise of a formula, a sketch, or a sequence of symbols on which mental calculations are performed. The point of insisting on such a representation is to give cognitive computational processes enough structure to explain different inferences drawn from different mental representations in cases in which the propositional contents of the different internal formulas represent the same things as being the same way, and so are true or false in the same circumstances. Although the new cognitive conception of propositions, developed well after Fodor’s paper, reduces the disparity between the cognitive requirements imposed by his explanations and those imposed by the proposition believed, the new conception may not eliminate the need for further symbolic structure of the sort he imagines, in which case his proposal can accommodate the new view of propositions.
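
  A speculative sketch of that last point, not drawn from Fodor: two internal formulas can have the same propositional content, being true and false in exactly the same circumstances, while differing in syntactic shape, and so in the processing the mind must do with them. The encoding and the crude size-based cost measure below are invented for the illustration.

```python
# Same content, different mental representation, different predicted processing cost.

def evaluate(formula, v):
    """Formulas are atoms (strings) or tuples ('not', f) / ('and', f, g)."""
    if isinstance(formula, str):
        return v[formula]
    if formula[0] == "not":
        return not evaluate(formula[1], v)
    if formula[0] == "and":
        return evaluate(formula[1], v) and evaluate(formula[2], v)

def size(formula):
    """A crude 'cost accounting': count the symbols the mind must manipulate."""
    return 1 if isinstance(formula, str) else 1 + sum(size(f) for f in formula[1:])

simple  = "P"                                              # the formula P
baroque = ("not", ("and", ("not", "P"), ("not", "P")))     # ~(~P & ~P), equivalent to P

# Identical truth conditions, hence the same propositional content...
assert all(evaluate(simple, {"P": b}) == evaluate(baroque, {"P": b}) for b in (True, False))
# ...but different internal formulas, and so different predicted costs (1 vs. 6 symbols).
print(size(simple), size(baroque))
```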

  Fodor expresses essentially this point about the need for mental representations to mediate the belief relation between agents and propositions in the following passage.

  A theory of propositional attitudes specifies a construal of the objects of the attitudes [the things desired, believed, known, etc.]. It tells for such a theory if it can be shown to mesh with an independently plausible story about the “cost accounting” for mental processes [how complex, time consuming, and difficult they are]. A cost accounting function is just a (partial) ordering of mental states by relative complexity. Such an ordering is, in turn, responsive to a variety of types of empirical data, both intuitive and experimental [which can be used to confirm or disconfirm hypotheses about conscious and unconscious mental functioning]. Roughly, one has a “mesh” between an empirically warranted cost accounting and a theory of the objects of propositional attitudes when one can predict the relative complexity of a mental state (or process) from whatever the theory assigns as its object [e.g., the proposition or a symbolic mental representation of it].… [T]o require that the putative objects of propositional attitudes predict the cost accounting for the attitude is to impose empirical constraints on the notation of (canonical) belief-ascribing sentences [i.e., sentences which report an agent as believing something]. So, for example, we would clearly get different predictions about the relative complexity of beliefs if we take the object of a propositional attitude to be the … [complement] of the belief-ascribing sentence [“John believes that S”] than if we take it to be, e.g., … [a certain highly complex sentence S* that is a logically equivalent transformation of S].11

 
