Content and Consciousness


by Daniel C. Dennett


  In this way Intentional explanations assume the environmental appropriateness of the connections between antecedent and consequent.30 Thus there is a sense in which Intentional explanation is just the reverse of extensional, behaviouristic explanation. Behaviourism seeks to find regularities and mechanisms that will explain the observed appropriateness or adaptiveness of the connections between antecedent and consequent events. Animal behaviour is generally appropriate to the environmental circumstances in which it occurs, and it is this ability to match behaviour to environment that the behaviourist tries to analyse by finding sequences of events that can be subsumed under general causal laws.

  For Intentional explanation, on the other hand, the fact that one event (as Intentionally characterized) is followed in an appropriate way by another is not even contingent, and hence not subject to explanation.31 The intention to raise one’s arm would not be the intention to raise one’s arm if it were not followed, barring interference, by the raising of one’s arm, so the question of why one follows the other is superfluous.

  It is this early end to explanation that puts Intentional science in disharmony with the rest of science. As Pittendrigh observes, the appropriateness or adaptedness of animal action implies organization (which he distinguishes from mere order as being relative to an end). ‘An organization is an improbable state in a contingent … universe; and as such it cannot be merely accepted, it must be explained.’32 Thus the very feature which signals an end to explanation in the Intentional system signals the need for explanation in the wider system of science as a whole. The two sciences are not just separate, they are warring, for positions on what does and does not require explanation cannot be isolated within autonomous branches of science. If adaptedness of animal behaviour admits of, and requires, no explanation, then the improbable organization of which Pittendrigh speaks requires no explanation, and if this is so, we must either abandon the principle that the improbable requires explanation – which would amount to the abandonment of the rest of science – or we must maintain that such organizations are not improbable states of the universe, which would require a total bouleversement of the physical sciences. The behaviourist has this much going for him: he is neither anarchist nor revolutionary. The same cannot be said for the Intentionalist.

  V THE WAY OUT

  To sum up the results of the chapter so far, the effect of the Intentionality thesis is to give the old, ill-envisaged dogma that the mind cannot be caged in a physical theory a particularly sharp set of teeth. The first challenge is the irreducibility hypothesis, that the Intentional cannot be reduced to the non-Intentional, or, as we have seen, the extensional. Then the evidence comes in that we can neither do without the Intentional, nor cleave to it alone, for there are signs that the possibility is remote of a successful non-Intentional behaviourist psychology; and the alternative of an entirely Intentional psychology would entail a catastrophic rearrangement of science in general. This is not a formal dilemma, since on the one hand a forlorn hope may be held out that some future behaviourist will be able to belie the many harbingers of doom and produce a working non-Intentional theory, and on the other hand there are certainly some scientific revolutionaries who would relish a return to an anthropocentric and teleological world view at the expense of the current centrality of modern physics.

  Fortunately, however, once the problem of Intentionality is clearly expressed, it points to its own solution. There is a loophole. The weak place in the argument is the open-endedness of the arguments that no extensional reduction of Intentional sentences is possible. The arguments all hinged on the lack of theoretically reliable overt behavioural clues for the ascription of Intentional expressions, but this leaves room for covert, internal events serving as the conditions of ascription. We do not ordinarily have access to such data, so they could not serve as our ordinary criteria for the use of ordinary Intentional expressions, but this is just a corollary of the thesis that our ordinary language accounts of behaviour are Intentional, and says nothing about the possibility in principle of producing a scientific reduction of Intentional expressions to extensional expressions about internal states. Could there be a system of internal states or events, the extensional description of which could be upgraded into an Intentional description? The answer to this question is not at all obvious, but there are some promising hints that the answer is Yes.

  The task of avoiding the dilemma of Intentionality is the task of somehow getting from motion and matter to content and purpose – and back. If it could be established that there were conceptually trustworthy formulations roughly of the form ‘physical state S has the significance (or means, or has the content) that p’, one would be well on the way to a solution of the problem. But if that is all it takes, the answer may seem obvious. Computers, we are told, ‘understand’ directions, send each other ‘messages’, ‘store the information that p’ and so forth, and do not these claims imply that some physical states of computers have content in the requisite sense? A hallmark of Intentional organisms pointed out by Taylor is that an Intentional description is one for the organism: for example, the condition that is antecedent to intentional action is the condition of the environment as seen by the organism. But is it not true that the activities or motions of any cybernetic device are also relative only to the environmental condition as ‘seen’ by the device? People who use computers are accustomed to describing the operation of their devices in Intentional terms. If they are justified in speaking this way – and are not merely speaking ‘metaphorically’ – the Intentionalist claim will be threatened, for then at least one sort of purely physical object will be understood as an Intentional system. There is, however, one serious flaw in our ‘hint’ that can be pointed out now. A computer can only be said to be believing, remembering, pursuing goals, etc., relative to the particular interpretation put on its motions by people, who thus impose the Intentionality of their own way of life on the computer. That is, no electrical state or event in a computer has any intrinsic significance, but only the significance given it by the builders or programmers who link the state or event with input and output. Even the production of ink marks on the output paper has no significance except what is given it by the programmers. Thus computers, if they are Intentional, are only Intentional in virtue of the Intentionality of their creators. People and animals, however, are not designed and manufactured the way computers and their programmes are, nor are they essentially in the service of interpreting, Intentional beings. (One could turn the argument around; then it becomes a rather top-heavy argument for the existence of an Intentional God – none of your deistic, abstract Gods – whom we are designed to serve.) If we are to avoid the God hypothesis, we must look elsewhere for a source of Intentionality in living systems; we must find something else to endow their internal states with content.
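  The point that a computer’s internal states carry only borrowed significance can be made vivid with a minimal sketch (in Python; the particular values and interpretations are illustrative assumptions, not drawn from the text): one and the same stored state ‘means’ a temperature or a letter only because a programmer supplies the rule linking it to input and output.

```python
# Illustrative sketch: the same bytes in memory have no intrinsic content.
state = b"\x00\x48"  # two bytes sitting in memory: no meaning in themselves

# One programmer's interpretation: a temperature reading in degrees.
as_temperature = int.from_bytes(state, "big")        # 72

# Another programmer's interpretation: an ASCII character.
as_character = state.decode("ascii").lstrip("\x00")  # "H"

print(as_temperature, as_character)  # 72 H -- the 'content' lies in the reading, not in the state
```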

  Following a well-beaten path, we can look to the theory of evolution by natural selection. The interpenetration of content and purpose has already been seen in the implication circle of belief and intention (see pp. 33–4), so it should not prove too surprising if the ability of the theory of natural selection to account for the apparent purpose-relativity of organs and capacities of living things is also the ability to account for the content of certain of their states. Stronger links can be dimly seen. Intentional description presupposes the environmental appropriateness of antecedent-consequent connections; natural selection guarantees, over the long run, the environmental appropriateness of what it produces.
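  As a rough illustration of how selection alone can yield environmental appropriateness, consider the following toy simulation (a sketch under assumed parameters; nothing in it comes from the text): a population of inherited response rules, culled each generation by how well their responses match what the circumstances call for, drifts toward fully ‘appropriate’ rules without any individual doing anything that deserves to be called matching behaviour to environment.

```python
import random

CONDITIONS = list(range(4))               # four environmental circumstances
APPROPRIATE = {c: c for c in CONDITIONS}  # the response each circumstance calls for

def random_rule():
    """An inherited response rule: circumstance -> response, initially arbitrary."""
    return {c: random.choice(CONDITIONS) for c in CONDITIONS}

def fitness(rule):
    """How many circumstances the rule answers appropriately."""
    return sum(rule[c] == APPROPRIATE[c] for c in CONDITIONS)

population = [random_rule() for _ in range(50)]
for _ in range(30):                           # thirty generations of selection
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]               # the better-matched half survive
    offspring = []
    for parent in survivors:
        child = dict(parent)
        c = random.choice(CONDITIONS)
        child[c] = random.choice(CONDITIONS)  # occasional copying error (mutation)
        offspring.append(child)
    population = survivors + offspring

print(max(fitness(r) for r in population))    # typically 4: fully appropriate rules emerge
```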

  An investigation of this avenue will take up the next few chapters. It can hardly be called striking out on a new trail. Considerable work has been done in what might be called the theory and construction of Intentional systems, but never to my knowledge has an attempt been made to spell out what the obligations and goals of this programme are. The burgeoning fields of information theory and ‘artificial intelligence’ have produced a wealth of ‘models’ which may deserve to be called Intentional systems, but the questions of whether or not these models do deserve this appellation, and whether or not there can be natural Intentional systems along the lines of these models are questions to which little attention has been paid.

  Theories of mind or behaviour in this general category are called ‘centralist’, in contrast to the ‘peripheralist’ theories of Stimulus-Response behaviourism. While the peripheralist hopes to characterize behavioural events and stimulation extensionally from the beginning, and arrive at extensional laws relating these, the centralist makes his initial characterization Intentional, describing the events to be related in law-like ways using either ordinary, or semi-ordinary, or even entirely artificial Intentional expressions. He then hopes that an adequate physical basis can be found among the internal states and events of the organism so that ‘reductions’ of Intentional sentences of the theory to extensional sentences of the theory are possible.33 The ground rules for such ‘reductions’ have not been set down, and this is one of the tasks of the next few chapters. A rudimentary excursion into neurology and information theory is unavoidable, and both fields are jungles of conflicting claims and theories. In part to avoid taking sides in these controversies, hypotheses will be put forward that ‘leave the details to the neurophysiologist’, and although every care will be taken to provide that when this is done the hypothesis in question is compatible with whatever details the neurophysiologist might come up with, this is admittedly armchair science with its attendant risks. The hope is that a strong case can be made for the theoretical underpinnings of centralism, leaving as much room for empirical variation as possible. The examination of centralism will yield a number of significant philosophical by-products, having to do especially with consciousness, reasoning and intention, and these will be developed in Part II.

  3

  EVOLUTION IN THE BRAIN

  VI THE INTELLIGENT USE OF INFORMATION

  In Chapter 1 we found a way of sidestepping the old and sterile problem of the ontological status of mental entities. In the place of an ontological division between phenomena or entities, we acknowledged only a division between the different things that we say, roughly characterized as a division between the mental language and the language of science, or physical language. In Chapter 2 this division was seen to coincide on a wide front, if not entirely, with the distinction between Intentional sentences and extensional sentences, and this raised a fundamental obstacle to our further efforts at relating mind to body, in the form of the Intentionalist thesis that it is logically impossible to ‘reduce’ the Intentional mode of discourse to the extensional. Acquiescence in this conclusion would leave large portions of our mental language discourse inexplicable in terms of the physical sciences. Two attempts to get around the Intentionalist thesis were found unpromising. Attempts at a purely extensional peripheralist science of behaviour have simply failed to marshal their data into a working theory, and the failures bear all the earmarks of fundamental theoretical error; and an ‘autonomous science of Intention’ cannot co-exist with the rest of science. Since we apparently cannot do without the Intentional, and cannot allow it to remain irreducible, the only course left is a more direct assault on the Intentionalist thesis. The weak point in the arguments for the thesis was seen to be the reliance on overt, external behavioural cues as the benchmark of extensional correlates. Would an examination of internal states and events gain us any leverage over peripheralist accounts and allow us to prove the Intentionalist thesis wrong?

  The theory that could do this would have to upgrade an extensional account of the system of relations of internal, cerebral states and events into Intentional characterizations of these states and events, i.e., as events related to a content or message or meaning, events signifying or reporting or commanding. This is of course a standard practice of neurophysiologists in expositions of their findings: they talk of neural signals, reports to the brain from the sense organs, and so forth, but this talk is largely fanciful, and the rationale and justification of this step need to be examined. What, if anything, permits us to endow neural events with content? Can the rules governing this step of theory be generalized to allow us to speak confidently of neural events bearing contents approximating to ‘the contents of our thoughts, perceptions and intentions’? To begin to answer these questions in this chapter we must venture into the area of neurophysiological hypothesis, stepping as lightly as possible, to see what the general shape of a theory would have to be to meet these requirements. That is, we shall investigate certain minimum, necessary conditions any centralist theory would have to meet, postponing until Chapter 4 the question of whether a theory meeting these conditions has met the sufficient conditions for ascription of content to neural states or events.

  We can call behaviour Intentional when it is of the sort that we normally characterize in Intentional terms, the sort that resists all efforts at extensional characterization. Thus searching for acorns and remembering to close the door are examples of Intentional behaviour, while stumbling, chewing and simply closing the door are not. We need not try to draw the line with precision since there are plenty of central cases we can consider before reaching any decisions about the penumbra, but as a general rule a bit of behaviour is non-Intentional if we could quite easily construct a device that performed it (a door-closer, a food-chewer), and is Intentional if it is not at all obvious that anything we might build could be said to be doing it (can we imagine a device which could be said, quite literally and unfancifully, to remember to close the door, to search for acorns, to believe it is raining?). Aficionados of robots and those familiar with the claims made by workers in the area of computer simulation of behaviour will perhaps reply that such devices already exist, but it is just these claims, among others, that we are scrutinizing. The controls and activities of computers can certainly be given an extensional description, and if they can also be characterized justifiably in Intentional terms we shall have one case of an Intentional-extensional reduction, and hence good reasons for expecting a similar reduction in the case of animals and people. The strength of the analogy between human behaviour and computer behaviour is thus a critical point which we will examine from a number of different points of view.

  No creature could exhibit Intentional behaviour unless it had the capacity to store information. For example, for a creature to exhibit genuine goal-directed behaviour, the goals the creature had would have to be ‘carried within it’ somehow, and ignoring animistic or mystical answers to the question how, the method of maintaining these goals within the creature will have to be some form of storage in its material organization. Moreover, the type of storage required must be what I shall call intelligent storage, the word ‘intelligent’ being used only as a tag for the time being, so as not to prejudge any questions about what constitutes genuine intelligence. This notion of intelligent storage can best be made clear by the use of a few examples. Often when a computer is said to store information the storage is nothing more than the capacity to produce a sequence of characters in response to a particular cue. Thus one can store whole books in a computer memory and on giving the input, say, ‘Middlemarch’, one would receive as output the lengthy typing out of the novel word for word. A computer used this way is, of course, nothing radically more than a tape-recorder with an automatic indexing system, and its storage does not differ in type from old-fashioned library-shelf storage; only the mechanics of storage and retrieval are different. Neither the computer nor the library could be said in any sense to understand what was stored. Indeed this storage can be called information storage only by grace of the fact that the users of the output can interpret it as information. One might speak of mountains storing geological and palaeontological information this way – all in precise sequence waiting to be interpreted. Intelligent storage differs from this in that the information stored can be used by the system that stores it, from which it follows that the system must have some capacity for activity other than the mere regurgitation of what is stored. What counts as using the information is hard to say in many cases, but some computer programmes ‘do enough’ with the data they are fed to be strong candidates for the honour of intelligent storage. To take a different example from the animal world, a parrot might have the ability to say ‘fire hurts’ and it might also exhibit fire-avoidance behaviour, but in the parrot’s case we would not suppose there was any connection between the ‘verbal’ and non-verbal behaviour, unless, of course, the parrot, contrary to all we know about parrots, only spoke his little piece when the occasion called for just such a warning. The ‘verbal’ capacity of the parrot is a clear case of non-intelligent information storage, while his capacity to learn from experience in such a way that his behaviour improves in prudence is what I shall call the capacity for intelligent storage of information. The parrot, in learning to say ‘fire hurts’, does not store the information that fire hurts (at least it is not information for the parrot), even though we can imagine someone using the parrot – as one might use a writing tablet or tape-recorder – to store this information non-intelligently.
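  The contrast between mere regurgitation and intelligent storage can be sketched in a few lines of code (hypothetical classes, offered only as an illustration of the distinction, not as anything the text commits itself to): the first system hands back a stored string for someone else to interpret, while the second system’s own behaviour is altered by what it has stored.

```python
class TapeRecorder:
    """Non-intelligent storage: stores a string and plays it back on cue."""
    def __init__(self):
        self.memory = {}

    def store(self, cue, text):
        self.memory[cue] = text

    def recall(self, cue):
        return self.memory[cue]          # information only for whoever reads the output


class CautiousAgent:
    """'Intelligent' storage: what is stored guides the agent's own behaviour."""
    def __init__(self):
        self.harmful = set()

    def experience(self, thing, hurt):
        if hurt:
            self.harmful.add(thing)      # stored information about the world

    def act(self, thing):
        return "avoid" if thing in self.harmful else "approach"


parrot = TapeRecorder()
parrot.store("speak", "fire hurts")
print(parrot.recall("speak"))            # 'fire hurts' -- but not information for the parrot

agent = CautiousAgent()
agent.experience("fire", hurt=True)
print(agent.act("fire"))                 # 'avoid' -- the stored fact is used by the system itself
```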

 
