Content and Consciousness


by Daniel C. Dennett


  A solely biological, non-Intentional theory of behaviour should be possible in principle, but it would be mute on the topic of the actions (as opposed to motions), intentions, beliefs and desires of its subjects. Moreover, the theory would be very difficult to get to without the understanding provided by the Intentional ascriptions of content. Thus one motive for centralism is that it can provide the physiologists with an invaluable heuristic advantage, as they have been quick to see; if they cannot view neural events as signals or reports or messages, they are left with almost no view of brain function at all. Were the physiologist to ban all Intentional coloration from his account of brain functioning, his story at best would have the form: functional structure A has the function of stimulating functional structure B whenever it is stimulated by either C or D … No amount of this sort of story will ever answer questions like why rat A is afraid of rat B, or how rat A knows which way to go for his food. If one does ascribe content to events, the system of ascription in no way interferes with whatever physical theory of function one has at the extensional level, and in this respect endowing events with content is like giving an interpretation to a formal mathematical calculus or axiom system, a move which does not affect its functions or implications but may improve intuitive understanding of the system.2
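The separation the passage describes can be made vivid in a toy sketch (every name and content here is hypothetical, chosen only for illustration): the extensional theory is nothing but stimulation rules of the form "A fires whenever C or D fires", while the Intentional interpretation is a separate overlay that reads off content without contributing anything to the dynamics, just as an interpretation of a formal calculus leaves its theorems untouched.

```python
# Extensional level: pure causal bookkeeping, no content.
# Structure A fires whenever C or D fires; B fires whenever A fires.
RULES = {
    "A": lambda active: "C" in active or "D" in active,
    "B": lambda active: "A" in active,
}

def step(active):
    """One tick of the purely extensional dynamics."""
    return {s for s, fires in RULES.items() if fires(active)}

# Intentional level: a heuristic overlay ascribing content to events.
# It consults the extensional story; it never alters it.
CONTENT = {
    "C": "report: dark object to the left",
    "A": "report: danger to the left",
    "B": "command: turn away",
}

def interpret(active):
    return [CONTENT.get(s, "(no content ascribed)") for s in sorted(active)]

after = step({"C"})       # extensional story: {"A"}
story = interpret(after)  # overlay: ["report: danger to the left"]
```

Deleting the `CONTENT` table would leave `step` and everything it predicts exactly as before; the overlay earns its keep only as an aid to understanding.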

  The heuristic value of giving an Intentional interpretation to events varies, of course, with the complexity of the events and their remoteness from the periphery of the nervous system. There is nothing to be gained by assigning content to the last-rank motor impulses that stimulate muscle contraction, for example. Giving such an event the imperative message ‘contract now, muscle!’ does little to clarify what is going on. Deeper in the brain, however, characterizing a state or event or structure not only as a physical entity operating under certain causal conditions but also as, for example, a specification of a goal or description of the environment or order to perform a certain task would be virtually the only way of ‘making sense’ of neural organization. More important to us here, however, than any aid and comfort Intentional interpretations may give the investigator is the matter of principle. If the idea of content ascription is sound in principle, regardless of how messy or useless it is in practice, it allows the conclusion that natural physical organisms are, with no help from Cartesian ghosts or interacting vital forces, Intentional systems.

  The ideal picture, then, is of content being ascribed to structures, events and states in the brain on the basis of a determination of origins in stimulation and eventual appropriate behavioural effects, such ascriptions being essentially a heuristic overlay on the extensional theory rather than intervening variables of the theory. A centralist theory would consist of two levels of explanation: the extensional account of the interaction of functional structures, and an Intentional characterization of these structures, the events occurring within them, and states of the system resulting from these. The implicit link between each bit of Intentional interpretation and its extensional foundation is a hypothesis or series of hypotheses describing the evolutionary source of the fortuitously propitious arrangement in virtue of which the system’s operation in this instance makes sense. These hypotheses are required in principle to account for the appropriateness which is presupposed by the Intentional interpretation, but which requires a genealogy from the standpoint of the extensional, physical theory.

  This ideal picture will provide a basis for discussion in subsequent chapters, but first there are complications to it which must be described since they have important implications of their own. First, the problem of tracing the link between stimulus conditions and internal events far from the periphery should not be underestimated. Even discounting the ‘ambiguity’ which was seen in Chapter 3 to infect neural signals generally, it is not to be expected that central events can be easily individuated in such a way that they have unique or practically unique sources in external stimulation. Suppose we tentatively identify a certain central event-type in a higher animal (or human being) as a perceptual report with the content ‘danger to the left’. Now probably in higher animals and certainly in human beings we would expect the idea of ‘danger to the left’ to be capable of occurring in many contexts: not only in a perceptual report, but also as part of a dream, in hypothetical reasoning (‘what if there were danger to the left’), as a premonition, in making up a story, and of course in negative form: ‘there is no danger to the left’. What is to be the relationship between these different ways in which this content and its variations can occur? Are we to hope for one extensionally characterized event-type an instance of which occurs whenever this idea in any of its guises occurs, or will the different contexts correlate with regular, law-governed variations of our initial event-type, or will there be one event-type, presumably the original perceptual report event-type, which systematically spawns the second-order event-types which are the signals of imagination, reasoning and so forth? What of belief that there is danger to the left? 
Belief is not an event, something that happens, but a state (which can sometimes be dated, but cannot be swift or slow), so are we to suppose that the state with this belief-content is established in any typical or regular way by our perceptual report event-type? Certainly for any event or state to be ascribed a content having anything to do with danger to the left, it must be related in some mediated way to a relevant stimulus source, but the hope of deciphering this relation is surely as dim as can be.3

  The problem with behavioural effects is similar. I have held that the claim to intelligent use of information depends on there being appropriate continuations or effects of signals, but how appropriate must they be, and how direct or indirect? How are we to measure potential effects on behaviour without a total knowledge of the functioning of the nervous system? Certainly the less direct the afferent-efferent links are the more difficult it will be to discover that they are at all appropriate. The more room there is for mediation and complexity, the more potentially intelligent a creature will be, but also the more difficult it will be to find detailed evidence that this intelligence in particular cases is due to this or that feature of its neural organization.

  An event, state or structure can be considered to have content only within a system as a whole, and it is this fact that virtually precludes the possibility of content ascription to events, states or structures that are relatively central in any large nervous system. Until one has traced their normal causes and effects all the way to both the afferent and efferent peripheries, one can have no inkling at all of their content. Near the peripheries one can ignore one condition or the other and so determine content of neuronal activity to a first approximation, as e.g., reporting a dark object in the visual field or ordering the raising of a leg, but by ignoring the eventual effects of the former and the central causes of the latter one leaves untouched the fundamental problem of how the brain uses information intelligently, and so one cannot be said to have determined the meaning of the event within the system as a whole.

  The task of ascribing content can be divided into two parts: the individuation by function of neural structures, events and states, and the subsequent framing of messages or contents for them. We have seen that a number of problems make the first half of the task all but impossible. For one thing, the relevant functions that must be determined are not local but global, extending to the peripheries. For another, the events and states that would be good candidates for content-bearers are, at least in the central areas, compound, ambiguous and apparently continuously changing. Difficulties of a different sort affect the second half of the task.

  X LANGUAGE AND CONTENT

  Assigning content to an event must be relating the event to a particular verbal expression. This could be done somewhat fancifully by using the form of direct quotation. The signal says, or tells the brain, ‘food straight ahead’ or ‘turn to the left’ or ‘there’s a pain in your left foot’. Only apparently more austere would be assignments in terms of indirect quotation or propositional attitude: a signal is to
the effect that …, or reports that …, or commands that …, and these are Intentional contexts, as are the forms: reports the presence of x, commands the x to y, etc. This is the point of centralism, to relate meanings to events, and this involves expressing the content of events, since content cannot be described. But then which expressions shall we use?

  At what level of afferent stimulus analysis in the neural net, for example, shall we move from content in terms of events in the sense organs to content in terms of events and objects in the external world? When do signals report not just patterns of excitation on the retina but things seen? In the case of the frog, for example, when do we say the analysis of stimulation has produced a signal about a moving dark object in the environment (the fly) rather than a moving dark area on the retina? It might seem that the answer is that object reference is permissible after convergence of signals from both eyes, or from several sense organs, but the frog will commit itself to a behavioural response on the basis of information from one eye alone. Here our semantic analogy to the effect that reference is determined by stimulus conditions and sense by efferent continuations breaks down. Here the shift from a retinal reference to an object reference must depend on what effect a signal has on behaviour. It is tempting in these cases to confuse a psychological question with an epistemological question. Must we lift, taste, smell and hear an object in addition to seeing it before we have ‘conclusive evidence’ that it is a concrete object in the world, or is seeing enough? Fortunately we do not require conclusive evidence of objectification, whatever that might be, before we act, or we would all starve to death. What our senses ‘tell’ us is not what they prove to us, and the question facing the centralist is what the organism ‘takes the signal to mean’.

  Even if there is a comfortable way of deciding when to raise the level of information to objective reference, there remains the question of how to describe the objects referred to. Let us consider another necessarily crude hypothetical example. A centralist of the future has access to the neural events in Fido’s brain and observes him refusing to venture out on to thin ice to retrieve a succulent steak. He has the following information: an afferent event of type A, previously associated with visual presentations of steaks, has continuations which trigger salivation and also activate a control system normally operating when Fido is about to approach or attack something, but this efferent continuation is inhibited by signals with a source traceable to a previous experience when he fell through thin ice. That is, the centralist has information regarding neural functioning that puts him in a strong position to say that Fido’s behaviour is determined in this case by the stored information that it is dangerous to walk out on thin ice. Such an account would be better substantiated than, for example, ‘Fido did not notice the steak’, ‘Fido has an aversion to smooth horizontal planes’, ‘Fido is overcome by Weltschmerz’. On the basis of his vast knowledge of the functional interrelations in Fido’s nervous system, the centralist assigns certain contents to certain events and structures. Roughly, one afferent signal means ‘there’s a steak’, its continuation means ‘get the steak’, some structure or state stores ‘thin ice is dangerous’ and produces, when operated on by a signal meaning ‘this is thin ice’, another signal meaning ‘stop; do not walk on the ice’. 
(The point about stimulus conditions and behavioural effects determining content comes out particularly clearly here; no structure or state could be endowed with the storage content ‘thin ice is dangerous’, no matter how it had been produced, if the input of ‘this is thin ice’ did not cause it to produce an appropriate continuation, such as ‘do not walk on the ice’. In the absence of such appropriate functioning one would be bound to conclude that the animal had failed to remember his previous experience, had failed to store intelligently that information, even if there were some clearly identifiable trace in the brain owing its origin to the earlier experience.)
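The Fido story, including the parenthetical point that storage content presupposes appropriate continuation, can be rendered as a deliberately crude sketch (all names and ascriptions hypothetical): an afferent steak-signal triggers salivation and an approach continuation, and a stored trace from the thin-ice episode inhibits the approach. The content ascriptions appear only in comments; the dynamics run entirely at the extensional level.

```python
def fido(sees_steak, on_thin_ice, memory):
    """Crude extensional model of Fido's behaviour-control."""
    effects = []
    approach = False
    if sees_steak:                  # afferent event type A: "there's a steak"
        effects.append("salivate")  # reflex continuation
        approach = True             # efferent continuation: "get the steak"
    # Stored state ("thin ice is dangerous") operated on by the
    # signal "this is thin ice" yields "do not walk on the ice".
    if on_thin_ice and "fell through ice" in memory:
        approach = False            # inhibitory continuation
    if approach:
        effects.append("approach")
    return effects

# With the stored trace, Fido salivates but stays put:
fido(True, True, {"fell through ice"})   # ["salivate"]
# Without it, nothing inhibits the approach:
fido(True, True, set())                  # ["salivate", "approach"]
```

If the inhibition clause were removed, the trace in `memory` would be a mere physical residue of the earlier experience, not intelligently stored information: no appropriate continuation, no content.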

  As soon as we consider any standards of accuracy in content ascription, the particular choices of the centralist in this example begin to look too crude. Does Fido really discriminate the object as a steak, or would ‘meat’ or ‘food’ have been more accurate choices? Presumably the signal’s stimulus conditions are more specific than would be implied by the word ‘food’, and we can expect the dog to show more interest in steak than in dog biscuits, so ‘food’ does not seem to be a good choice from the point of view of either stimulus conditions or behaviour, but ‘meat’ suggests too much. Surely the dog does not recognize the object as a butchered animal part, which is what the word ‘meat’ connotes, and ‘steak’ has even more specific implications. Should we be worried by these implications? Yes, if what we are trying to do is ‘specify the concepts’ that operate in the dog’s direction of behaviour. What the dog recognizes this object as is something for which there is no English word, which should not surprise us – why should the differentiations of a dog’s brain match the differentiations of dictionary English?

  It might seem that we could get at the precise content of the signal by starting with an overly general term, such as ‘food’, and adding qualifications to it until it matches the dog’s differentiations, but this would still impart sophistications to the description that do not belong to the dog. Does the dog have the concept of nourishment that is involved in the concept of food? What could the dog do that would indicate this? Wanting to get and eat x is to be distinguished from recognizing x as food. These hair-splitting objections might lead the zealously rigorous centralist to formulate artificial languages for expressing the content of the events and states he isolates, but to go to such efforts in the name of precision is to lose sight of the essential point and burden of centralism.

  The centralist is trying to relate certain Intentional explanations and descriptions with certain extensional explanations and descriptions, and the Intentional explanations that stand in need of this backing are nothing more than the rather imprecise opinions we express in ordinary language, in this case the opinion that Fido’s desire for the steak is thwarted by his fear of the thin ice. If the centralist can say, roughly, that some feature of the dog’s cerebral activity accounts for his desire to get the steak, and some other feature accounts for his fear (inculcated by certain past experiences) of what he takes to be thin ice, he will be matching imprecision for imprecision, which is the best that can be hoped for.

  Precision would be a desideratum if it allowed safe inferences to be drawn from particular ascriptions of content to subsequent ascriptions of content and eventual behaviour, but in fact no such inferences at all can be drawn from a particular ascription. Since content is to be determined in part by the effects that are spawned by the event or state, the Intentional interpretation of the extensional description of an event or state cannot be used by itself as an engine of discovery to predict results not already discovered or predicted by the extensional theory. Ascriptions of content always presuppose specific predictions in the extensional account, and hence the Intentional level of explanation can itself have no predictive capacity. That is, while it is true that if a person believes that A is the only way to get B, and if he wants B, it follows or can be predicted that he wants A, the centralist cannot use such an Intentional prediction to predict further events in the nervous system, for he could have no evidence that the antecedents of the hypothetical were true (and precise) unless he had already determined or predicted (via his extensional theory) the existence of the state which he would associate with wanting A, and so forth all the way to behavioural manifestations. Since Intentional explanations presuppose appropriateness or rationality, rational coherence is a logical requirement of content ascriptions, but it is no logical requirement of neural function (which may suffer breakdowns or be infelicitously organized in the first place), and therefore inferences made at the Intentional level will be borne out only when neural functional organization achieves ‘ideal’ rationality, something for which there is no guarantee, and no way to check independently of extensional level determinations of function.
From any portion of the Intentional story a further portion can be generated only on the assumption that the ascriptions of content so far made are ‘accurate’, and to test this assumption one must see if what one generates on the basis of these ascriptions is borne out by details of the extensional story. The ascription of content is thus always an ex post facto step, and the traffic between the extensional and Intentional levels of explanation is all in one direction.

  This feature can be easily overlooked by investigators in memory mechanisms, who occasionally speak as if they were looking for word-analogues and sentence-analogues in the brain. A sentence token (a particular occurrence of a sentence) is a token of a particular sentence type in virtue of its having certain syntactic parts (word tokens) and a certain syntactic structure (the ordering of the word tokens), and these features of the thing or event, the sentence token, serve to restrict and determine – in ways very difficult to describe – the function the thing or event has within a particular system, say Jones and Smith conversing in English. Thus when Jones says ‘pass the salt’, the likely effect of this utterance event on the system Jones-Smith is in part based on internal (in this case phonological) traits of the event to which we ascribe content. The event has syntactic parts that can be read off (by anyone who understands English). There is no guarantee, however, that the things and events making up the Intentional system that is a particular creature will have analogous syntactic parts or structures at all and, if they do not, there is no guarantee that they will have their functions restricted in ways much like the ways in which sentence tokens have their functions restricted. It is possible, perhaps, that the brain has developed storage and transmission methods involving syntactically analysable events or structures, so that, for example, some patterns of molecules or impulses could be brain-word tokens, but even if there were some such ‘language’ or ‘code’ or what Zeman calls ‘the brain writing which people have in common regardless of their nationality and other differences’,4 there would also have to be mechanisms for ‘reading’ and ‘understanding’ this language. Without such mechanisms, the storage and transmission of sentence-like things in the brain would be as futile as saying ‘giddyap’ to an automobile. 
These reading mechanisms, in turn, would have to be information processing systems, and what are we to say of their internal states and events? Do they have syntactically analysable parts? The regress must end eventually with some systems which store, transmit and process information in non-syntactic form. Of all the common analogies used to describe the brain, the analogy of a community of correspondents (which is the inevitable suggestion whenever there is talk of codes and languages in the brain) is the most far-fetched and least useful. It has the disadvantage of merely postponing the central problem before us by positing unanalysed man-analogues as systematic elements in that which we are trying to analyse, namely Man. The ‘little man in the brain’, Ryle’s ‘ghost in the machine’, is a notorious non-solution to the problems of mind, and although it is not entirely out of the question that the ‘brain-writing’ analogy will have some useful application, it does appear merely to replace the little man in the brain with a committee.5
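The 'giddyap' point admits a minimal sketch (the classes and vocabulary are invented for illustration): a stored sentence-like token is causally inert unless some further mechanism reads it, and any such reader is itself an information-processing system, which is just where the regress restarts.

```python
class Automobile:
    """A system with no reading mechanism: stored tokens are futile."""
    def receive(self, token):
        return None  # the token has no effect whatsoever

class Horse:
    """A system whose 'reader' maps tokens to function."""
    VOCAB = {"giddyap": "trot"}  # the reading mechanism itself

    def receive(self, token):
        return self.VOCAB.get(token)

Automobile().receive("giddyap")  # None: saying 'giddyap' to a car
Horse().receive("giddyap")       # "trot": the token is 'understood'
```

Note that the `VOCAB` lookup is doing all the work, and it processes the token in non-syntactic form; positing brain-writing merely relocates this work rather than explaining it.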

 
