Content and Consciousness


by Daniel C. Dennett


  In one respect the distinction between the personal and sub-personal levels of explanation is not at all new. The philosophy of mind initiated by Ryle and Wittgenstein is in large measure an analysis of the concepts we use at the personal level, and the lesson to be learned from Ryle’s attacks on ‘para-mechanical hypotheses’ and Wittgenstein’s often startling insistence that explanations come to an end rather earlier than we had thought is that the personal and sub-personal levels must not be confused. The lesson has occasionally been misconstrued, however, as the lesson that the personal level of explanation is the only level of explanation when the subject matter is human minds and actions. In an important but narrow sense this is true, for as we see in the case of pain, to abandon the personal level is to stop talking about pain. In another important sense it is false, and it is this that is often missed. The recognition that there are two levels of explanation gives birth to the burden of relating them, and this is a task that is not outside the philosopher’s province. It cannot be the case that there is no relation between pains and neural impulses or between beliefs and neural states, so setting the mechanical or physical questions off-limits to the philosopher will not keep the question of what these relations are from arising. The position that pains and beliefs are in one category or domain of inquiry while neural events and states are in another cannot be used to isolate the philosophical from the mechanical questions, for, as we have seen, different categories are no better than different Cartesian substances unless they are construed as different ontological categories, which is to say: the terms are construed to be in different categories and only one category of terms is referential. 
The only way to foster the proper separation between the two levels of explanation, to prevent the contamination of the physical story with unanalysable qualities or ‘emergent phenomena’, is to put the fusion barrier between them. Given this interpretation it is in one sense true that there is no relation between pains and neural impulses, because there are no pains; ‘pain’ does not refer. There is no way around this. If there is to be any relation between pains and neural impulses, they will have to be related by either identity or non-identity, and if we want to rule out both these relations we shall have to decide that one of the terms is non-referential. Taking this step does not answer all the philosophical questions, however, for once we have decided that ‘pain’-talk is non-referential there remains the question of how each bit of the talk about pain is related to neural impulses or talk about neural impulses. This and parallel questions about other phenomena need detailed answers even after it is agreed that there are different sorts of explanation, different levels and categories. There is no one general answer to these questions, for there are many different sorts of talk in the language of the mind, and many different phenomena in the brain.

  Part II

  Consciousness

  5

  INTROSPECTIVE CERTAINTY

  XII THE CERTAINTY OF CERTAIN UTTERANCES

  The most central feature of mind, the ‘phenomenon’ that seems more than any other to be quintessentially ‘mental’ and non-physical, is consciousness. In the chapters to follow, consciousness will be analysed from both the personal and sub-personal points of view, and the major advantage to be gained from paying attention to possible sub-personal accounts of consciousness will be that it will allow us to see that consciousness is not one feature or phenomenon or aspect of mind, but several. Once the term ‘consciousness’ is seen to allude to an incompatible congeries of features, and these features are sorted out and described, many of the most stubborn perplexities in philosophy of mind dissolve. The quest for a plausible and consistent analysis of consciousness develops into the hunting down of that elusive quarry, the little man in the brain, who is driven first from his role as introspector only to reappear as perceiver, reasoner, intender and knower. Since Ryle’s Concept of Mind, we all scoff at the notion of this little man, but scoffing is not enough. Expelling him from our thinking about mind requires, I hope to show, more radical alterations in our views of mental phenomena than are usually envisaged. It is one thing to exorcize the ghost in the machine, but he can reappear in more concrete form, as, for example, a stimulus-checking mechanism or – as we have seen – as a brain-writing reader, and in these guises he is equally subversive.

  Our avenue to consciousness in ourselves is generally held to be the faculty of introspection, and our avenues to consciousness in others are their introspective reports. Getting at the putative phenomenon of consciousness requires that we first understand these modes of access, and the traditional problem with these is that they seem to be infallible in some strange way; we seem to have certainty about the contents of our own thoughts.

  The intuited commonplace that we cannot be mistaken about the content of our own consciousness has been variously expressed and explained in the philosophical literature. The picture, due ultimately to Descartes, of the introspector infallibly perusing the presentations of consciousness has been generally acknowledged as confused, but the alternatives proposed have so far fallen short of giving a satisfactory account. The most promising rivals to the Cartesian view all start from the observation that since any referring, factual report can be mistaken, our introspective utterances must not be referring, factual reports. Thus Wittgenstein holds (or is often held to hold) that the invulnerability to error of pain reports is due to the fact that ‘the verbal expression of pain replaces crying and does not describe it’ – and hence is not a report at all, but akin to such other behavioural manifestations as writhing and crying.1 Ryle adopts a similar position in The Concept of Mind, saying that reports of pain are ‘avowals’, not assertions.2 Miss Anscombe’s solution is to claim that pain reports and some other introspective reports are not cases where we have knowledge of what we say, but where we merely can say what we say: ‘there is point in speaking of knowledge only where a contrast exists between “he knows” and “he (merely) thinks he knows” ’.3 These views all have in common the move of making introspective reports the sort of things to which ‘right’ and ‘wrong’ or ‘true’ and ‘false’ do not apply, but in a variety of ways they are implausible. When I tell the doctor the pain is in my big toe I am certainly not just doing a sophisticated bit of whining, as Wittgenstein’s view suggests, for I fully intend to inform the doctor. Ryle’s view suffers from a parallel defect, and both views, however plausible they can be made for reports of pain, become highly implausible when other introspective utterances are considered. 
Anscombe’s view is plausible until one asks how she proposes to distinguish the fact that I can say all sorts of gibberish from the fact that I can say where my pain is. Her view depends on the sense of ‘can say’ which is the same as ‘can tell’, and ‘can tell’ reintroduces the notion of truth and the accompanying question of how we can tell. The answer to this question is that we just can, that’s all. In § 11 it was claimed that explanations in terms of pains and persons’ reports of pains do reach an abrupt halt at this point; there is nothing more to be said from this stance, but from another stance an explanation can be given of this primitive ‘ability’ we have.

  These three views are on the right track in attempting to avoid the Cartesian view of the infallible reporter, the impossibility of which can be seen by noting its analogical character. Since a reporter, a human being, can wrongly identify what he sees (what things are out there), merely moving him ‘inside’ and making him an introspecting whatever-it-is is not going to ensure that he will infallibly report experiences (what things are in here). One cannot have reports without a reporter, so the notion of infallible reports must just be wrong. Where the three views go off on a wrong track is in supposing that the solution can be given at the personal level of explanation. All three views deny, from the stance of ordinary mental language talk about pains, thoughts and so forth, that introspective utterances are – from this stance – what they so manifestly are: reports of pains, thoughts and so forth that can, like any reports, be true or false. The reporter of mental experiences is, as everyone knows, the person himself, and what he is doing is reporting, not moaning or avowing or engaging in a sort of glossolalia to which questions of truth do not apply. We cannot answer the question of how these reports are infallible by denying that they are reports. If we are unsatisfied – as I think we must be – with an early end to explanation here, namely that introspective reports just are infallible, we must abandon the personal level and ask a different question: how can introspective utterances be so related to certain internal conditions that they can be viewed as error-free indications of these internal conditions? The relationship between this question and the earlier one is not at all obvious, but will become clear once an answer is sketched out.

  At the sub-personal level, the key to the solution of the problem lies in the distinction between a functional or logical state of a system and a physical state. Putnam first pointed out the remarkable and fruitful analogy ‘between logical states of a Turing machine and mental states of a human being, on the one hand, and structural states of a Turing machine and physical states of a human being, on the other.’4 Turing devised a general way of describing the organization of any computer or automaton in terms of an ordered collection of logical states, which are completely specified in a machine table by their relations to each other and to the input and output of the automaton, but whose physical realization in ‘hardware’ is left open. Any system for which a machine table can be specified is a Turing machine, and a particular Turing machine (as characterized by a particular machine table) might be built in a variety of very different ways, e.g., directly out of electronic components, or ‘simulated’ in an existing computer, or with hydraulic valves and plumbing, or by a large room full of people given certain tasks. Thus one identifies a Turing machine by the functional interrelation of its states, not by its physical constitution, and, similarly, a logical state is the state it is in virtue of its relations to other states and the input and output, not its physical realization or characteristics. A particular machine T is in logical state A if, and only if, it performs what the machine table specifies for logical state A, regardless of the physical state it is in. Putnam explains:
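The idea of a machine table can be made concrete with a small sketch (my illustration, in Python; the particular table and its transitions are invented for the example and appear nowhere in Putnam). The machine is identified by the table alone: any physical realization that follows it counts as the same Turing machine, and a state is the state it is solely in virtue of its relations to the other states and to the input and output.

```python
# Each entry: (state, symbol_read) -> (symbol_to_write, head_move, next_state).
# The table is the whole specification; 'hardware' is left entirely open.
MACHINE_TABLE = {
    ("A", "0"): ("1", +1, "B"),
    ("A", "1"): ("0", +1, "B"),
    ("B", "0"): ("1", -1, "HALT"),
    ("B", "1"): ("0", -1, "HALT"),
}

def run(table, tape, state="A", head=0, max_steps=100):
    """Simulate the table on a tape (a dict from position to symbol).
    Whatever performs these transitions -- electronics, plumbing, a room
    full of people -- *is* the machine the table specifies."""
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, "0")                 # blank cells read as "0"
        write, move, state = table[(state, symbol)]  # consult the table
        tape[head] = write
        head += move
    return state, tape
```

On this rendering, being "in logical state A" just is being disposed to make the transitions the table lists under A, whatever the underlying physical state.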

  Now let us suppose that someone voices the following objection: ‘In order to perform the computation [of the 3,000th digit of π] just described, T must pass through states A, B, C, etc. But how can T ascertain that it is in states A, B, C, etc.?’

  It is clear that this is a silly objection. But what makes it silly? For one thing, the ‘logical description’ (machine table) of the machine describes the states only in terms of their relations to each other and to what appears on the tape. The ‘physical realization’ of the machine is immaterial, so long as there are distinct states A, B, C, etc., and they succeed each other as specified in the machine table. Thus one can answer a question such as ‘How does T ascertain that X?’ (or ‘compute X’, etc.) only in the sense of describing the sequence of states through which T must pass in ascertaining that X (computing X, etc.), the rules obeyed, etc. But there is no ‘sequence of states’ through which T must pass to be in a single state! Indeed, suppose there were – suppose T could not be in state A without first ascertaining that it was in state A (by first passing through a sequence of other states). Clearly a vicious regress would be involved. And one ‘breaks’ the regress simply by noting that the machine, in ascertaining the 3,000th digit in π, passes through its states – but it need not in any significant sense ‘ascertain’ that it is passing through them.5

  Suppose T ‘ascertained’ it was in state B; this could only mean that it behaved or operated as if it were in state B, and if T does this it is in state B. Possibly there has been a breakdown so that it should be in state A, but if it ‘ascertains’ that it is in state B (behaves as if it were in state B) it is in state B.

  Now suppose the machine table contained the instruction: ‘Print: “I am in state A” when in state A.’6 When the machine prints ‘I am in state A’ are we to say the machine ascertained it was in state A? The machine’s ‘verbal report’, as Putnam says, ‘issues directly from the state it “reports”; no “computation” or additional “evidence” is needed to arrive at the “answer”.’ The report issues directly from the state it reports in that the machine is in state A only if it reports it is in state A. If any sense is to be made of the question, ‘How does T know it is in state A?’, the only answer is degenerate: ‘by being in state A’. ‘Even if some accident causes the printing mechanism to print: “I am in state A” when the machine is not in state A, there was not a “miscomputation” (only, so to speak, a “verbal slip”).’ Putnam compares this situation to the human report ‘I am in pain’, and contrasts these to the reports ‘Vacuum tube 312 has failed’ and ‘I have a fever’. Human beings have some capacity for the monitoring of internal physical states such as fevers, and computers can have similar monitoring devices for their own physical states, but when either makes a report of such internal physical conditions, the question of how these are ascertained makes perfect sense, and can be answered by giving a succession of states through which the system passes in order to ascertain its physical condition. But when the state reported is a logical or functionally individuated state, the task of ascertaining, monitoring or examining drops out of the reporting process.

  A Turing machine designed so that its output could be interpreted as reports of its logical states would be, like human introspectors, invulnerable to all but ‘verbal’ errors. It could not misidentify its logical states in its reports just because it does not have to identify its states at all. If the analogy to human introspection is to be more than just suggestive, however, we must develop a more detailed picture of a machine which makes ‘introspective’ reports.
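The self-reporting machine can itself be sketched (again my own illustration, not Putnam's or the text's; the state names and transition table are invented). The report is a direct function of the state occupied, so there is no intervening step of identification that could go wrong; at worst a faulty printer garbles the string, which is a ‘verbal slip’, not a miscomputation.

```python
TABLE = {("A", "x"): "B", ("B", "x"): "A"}  # a trivial two-state machine

def step(state, symbol, table):
    """One transition. The 'report' issues directly from the state the
    machine is in: no computation or additional evidence is consulted,
    so the machine cannot misidentify its own logical state."""
    utterance = f"I am in state {state}"    # direct function of the state
    return utterance, table[(state, symbol)]
```

Contrast a report such as ‘Vacuum tube 312 has failed’, which would require a monitoring routine, a genuine sequence of states passed through in ascertaining the physical condition, and which could therefore be the product of a miscomputation.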

  XIII A PERCEIVING MACHINE

  We want to describe a machine that would report its ‘mental experiences’ with the infallibility of human introspectors. Such a machine will require quite a sophisticated print-out capacity for making its reports, and, if the analogy is going to be convincing in detail, we must first consider how the human behaviour of speech might be controlled by neural mechanisms. It would be naïve to suppose that introspective reports, or indeed any human utterances, are the immediate functions of any interesting internal logical states, on the model of Putnam’s machine print-out ‘I am in state A’. The production of speech is highly mediated by systems into which at present we have only meagre insights, but some general details of speech controls can be derived from an examination of the structure of language itself. The utterances of a natural language vary in certain rule-governed ways, and could only be produced by systems having certain sorts of organization. Chomsky and others have initiated important research in this area, and one of the most important implications of their work is that the controls of linguistic behaviour must be hierarchically rather than serially arranged.7 There must be a control for the whole sentence or utterance that precedes and directs the production of each word or phoneme in turn. Applying the loose notion of content ascription to these hierarchies, we can describe a hierarchy of commands. Last-rank efferent events hardly need be given content; their commands amount to ‘contract muscle’ or, slightly higher, ‘tongue forward’, and so forth. The commands organizing these would be phonemic, ‘utter: “o” ’; at the next level up, events would control the organization of phonemic sequences. Here the command should not be in the form of ordinary quotation (‘utter: “the cat is on the mat” ’), since, for example, ‘bear’ and ‘bare’ are phonemically equivalent and hence not distinguishable at this level of control. 
In some cases of verbal behaviour the goal is merely the production of a phonemic sequence, as, for example, in beginning foreign language class drills, and the higher controls of these activities would be commands of the form ‘utter: …’ followed by a phonemic sequence, and the only command of interest above that would be ‘mimic the teacher’ or something like that.

  In slightly different cases such as taking an oath, reciting a poem or, in general, quoting someone or some document, higher commands would have an oratio recta (direct quotation) content: ‘say: “I do solemnly swear …” ’, and the overriding control might be given the content ‘recite what’s put before you’. The elaboration of these controls would, of course, differ in different people; the child may call out one word at a time while the adult reciter’s controls may govern the production of whole phrases or sentences.

  What is missing from these cases but is normally present in verbal behaviour controls is a command with oratio obliqua (indirect quotation) content: ‘say that …’, ‘ask him whether …’. Here, in contrast to the cases of recitation or quotation, what is to be done can be done in a number of different ways, what is to be said can be expressed variously. Not all such controls would need to be given oratio obliqua contents, especially where what is to be performed is a speech act for which we have a name. ‘Apologize’ might be used as the content of an event at a level one step higher than the oratio recta commands ‘say: “pardon” ’, ‘say: “excuse me” ’, and ‘say: “terribly sorry” ’. It is tempting to go overboard at this stage and decide that the variations in event content at this level coincide with variations in our inner thoughts and on the basis of this proclaim the identity of thoughts with this sort of postulated brain process. For example, the event to which we gave the content ‘apologize!’ on the basis of behavioural effect might be adjusted on the basis of stimulus conditions or more central causes so that it was given, in one case, a content coincident with the thought that one was genuinely sorry, and in another, a content coincident with the thought that protocol demanded an apology, but we shall see that there are obstacles in the way of making such a straightforward identification.
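The hierarchy of commands can be given a toy rendering (illustrative only; the level names, expansions and ‘motor’ commands below are my assumptions, not the text's). A single oratio obliqua command is satisfiable by several oratio recta realizations; each of those expands into a phonemic sequence, and each phoneme into last-rank efferent commands.

```python
SPEECH_ACTS = {  # oratio obliqua level: one act, several realizations
    "apologize": ['say: "pardon"', 'say: "excuse me"', 'say: "terribly sorry"'],
}
PHONEMES = {     # oratio recta command -> phonemic sequence
    'say: "pardon"': ["p", "ar", "d", "o", "n"],
}
MOTOR = {        # phoneme -> last-rank efferent commands
    "p": ["close lips", "release with burst"],
    "o": ["round lips", "voice"],
}

def execute(act):
    """Expand a speech-act command down the hierarchy. Which oratio recta
    realization is selected would, in a real system, depend on stimulus
    conditions and more central causes; here we simply take the first."""
    recta = SPEECH_ACTS[act][0]
    phonemes = PHONEMES[recta]
    motor = [cmd for ph in phonemes
             for cmd in MOTOR.get(ph, [f"articulate '{ph}'"])]
    return recta, phonemes, motor
```

The point of the sketch is only structural: the control for the whole utterance precedes and directs the production of each phoneme in turn, rather than the utterance being assembled serially from independently triggered parts.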
