Hare Brain, Tortoise Mind


by Guy Claxton


  What is the relationship between implicit know-how – the practical intelligence that enables us to function well in the world – on the one hand, and the explicit, articulated understanding that d-mode delivers on the other? It is widely assumed, in education and elsewhere, that conscious comprehension – being able to articulate and explain – is of universal benefit. To understand how and why to do something ought to help us to do it. But does it? In the case of the adults’ response to the Rubik cube, it seems as if there is an acquired need to understand which may actually block the use of our non-intellectual ways of knowing. We have forgotten them, or do not ‘believe’ in them any more. There is now good evidence that this suspicion is well founded.

  The ‘stupid cube’ effect appeared in Broadbent and Berry’s studies. Not only does people’s intuitive ability to control the factory output develop much faster than their ability to explain what they are doing; their confidence in their ability tends to follow their explicit knowledge, rather than their know-how. Unless they are able to explain what they are doing, they tend to underestimate quite severely how well they are doing it. People feel as if they are merely guessing, even when they are in fact doing well, and, if they had felt free to, many of the subjects would probably have withdrawn from the game, for fear of looking foolish. It is only because they would have felt even more foolish dropping out that they persevered with the task, despite their lack of confidence – and actually gave their unconscious learning a chance to reveal itself. The subjects have learnt to put their faith in d-mode as the indicator of how much they know, and therefore to distrust, at least initially, perfectly effective knowledge that has not (yet) crystallised into a conscious explanation.
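
  The hidden rule in the commonly cited ‘sugar factory’ version of the Broadbent and Berry task is strikingly simple once written down – which only underlines how resistant it is to verbalisation from the inside. A minimal sketch in Python, with the function name and exact constants treated as illustrative rather than definitive:

```python
import random

def sugar_factory(workforce, prev_output):
    """One step of the commonly cited Broadbent-and-Berry sugar-factory rule:
    output depends on the current workforce AND the previous output, which is
    what makes the system hard to verbalise even when you can control it."""
    output = 2 * workforce - prev_output
    output += random.choice([-1, 0, 1])   # small random perturbation
    return max(1, min(12, output))        # both quantities run from 1 to 12

# A player tries to hold output at a target of 6 by choosing workforce 1..12.
output = 6
for trial in range(5):
    workforce = random.randint(1, 12)     # stand-in for the player's choice
    output = sugar_factory(workforce, output)
    print(f"trial {trial}: workforce={workforce} -> output={output}")
```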

  You would at least imagine that there would be a positive link between the two kinds of knowledge, implicit and explicit: that people’s sense of having a conscious handle on what they are up to should correlate with how well they are in fact doing. After all, we expect airline pilots and medical students to take written examinations, as well as practical ones, so we must assume that the verbal tests of knowledge and understanding are assessing something relevant. Unfortunately, this does not always seem to be the case. In several investigations of the Broadbent and Berry type, people’s ability to articulate the rules which they think are underlying their decisions turns out to be negatively related to their actual competence.2 People who are better at controlling the situation are actually worse at talking about what they are doing. And conversely, in some situations it appears that the more you think you know what you are doing, the less well you are in fact doing. You can either be a pundit or you can be a practitioner, it appears; not always can you become both by the same means.

  The situations where this dislocation between expertise and explanation appears most strongly are those that are novel, complicated and to some extent counterintuitive; where the relevant patterns you need to discover are different from what ‘common sense’ – the ‘reasonable assumptions’ on which d-mode rests – might predict.3 In situations where a small number of factors interact in a predictable fashion, and where these interactions are in line with what seems ‘plausible’ or ‘obvious’, then d-mode does the job, and trying to figure out what is going on can successfully short-circuit the more protracted business of ‘messing about’. But where these conditions are not met, then d-mode gets in the way. It is not the right tool for the job, and if d-mode is persistently misused, the job cannot be successfully completed. Trying to force the situation to fit your expectations, even when they are demonstrably wrong, allows you to continue to operate in d-mode – but prevents you from solving the problem.

  For example, consider a classic experiment performed by Peter Wason of the University of London. Undergraduate students were shown the three numbers 2, 4 and 6 and told that these conformed to a rule that Wason had in mind.4 The students’ job was to generate other trios of numbers – in response to which Wason would say whether they did or did not conform to the rule – until they thought they knew what the rule was, at which point they should announce it. Typically the conversation would go like this.

  Student: 3, 5, 7?

  Wason: Yes (that meets the rule).

  S: 10, 12, 14?

  W: Yes.

  S: 97, 99, 101?

  W: Yes.

  S: OK, the rule is obviously n, n+2, n+4.

  W: No it isn’t.

  S: (very disconcerted) Oh. But it must be!

  The problem is that the students thought the rule was obvious from the start, and were making up numbers only with the intention of confirming what they thought they already knew. If their assumption had been correct, their way of tackling the problem would have been logical and economical. But when what is plausible is not what is actually there, those operating in this manner are in for a nasty shock. In fact, Wason’s rule is much more general: it is ‘any three ascending numbers’. So ‘2, 4, 183’ would have been a much more informative combination to try – even though, to someone who thinks they know the rule, it looks ‘silly’.
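
  Wason’s actual rule is almost trivial to state explicitly – which is what makes the trap so instructive. Every ‘confirming’ triple the student offers satisfies both the true rule and the private hypothesis, and so carries no information at all; only a probe that could fail under the hypothesis can tell the two apart. A minimal sketch (the probe triples come from the dialogue above; everything else is illustrative):

```python
def wason_rule(a, b, c):
    """Wason's actual rule: any three ascending numbers."""
    return a < b < c

def student_hypothesis(a, b, c):
    """The rule the student believes is being tested: n, n+2, n+4."""
    return b == a + 2 and c == b + 2

# 'Confirming' probes: each satisfies BOTH rules, so it cannot tell them apart.
for triple in [(3, 5, 7), (10, 12, 14), (97, 99, 101)]:
    print(triple, wason_rule(*triple), student_hypothesis(*triple))
# -> True True for all three

# The 'silly'-looking probe is the informative one: it separates the rules.
print((2, 4, 183), wason_rule(2, 4, 183), student_hypothesis(2, 4, 183))
# -> True False: Wason says yes, the hypothesis says no, so the
#    hypothesis is revealed as too narrow.
```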

  When d-mode is disconcerted like this, it often responds by trying all the harder. Instead of flipping into a more playful or lateral mode, in which silly suggestions may reveal some interesting information, people start to devise more and more baroque solutions. ‘Ah ha,’ they may think, ‘maybe the rule is the middle number has to be halfway between the first and the third. So let’s try 2, 5, 8 and 10, 15, 20.’ When Wason agrees that these too conform to the rule, they heave a sigh of relief – only to be flummoxed once again when they announce the rule and are told it is incorrect. Or, even more ingeniously, they may cling to the original hypothesis – which they have clearly been told is not the solution – by rephrasing it. So they might say, ‘OK, it’s not n, n+2, n+4, but perhaps it is take one number, add four to it to make the third number, and then add the first and third together and divide by two to get the middle number’ – which is, of course, the same thing. Having articulated a misleading account, people then proceed to use this faulty map to guide their further interactions with the task, rather than relying on the ability of trial and error, ‘messing about’, to deliver the knowledge they need. Attention gets diverted from watching how the system actually behaves to trying to figure out what is going on, and using these putative explanations as the basis for action.
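
  (To see that the rephrased rule changes nothing: if the first number is n, then the third is n + 4, and the middle is (n + (n + 4))/2 = n + 2 – which is exactly the original n, n+2, n+4 hypothesis in disguise.)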

  What happens when you introduce into the Broadbent and Berry task some instruction, in the form of potentially helpful hints and suggestions? Does this give learners a head start, or does it handicap them? Again, conventional educational ‘wisdom’ would strongly back the former, and once more the research shows that things are not so straightforward. Conscious information is not always an asset, especially when it is given early on in the learning process, or when it serves to direct attention to features of the situation which may be true, but which are not strictly relevant to the way it behaves, or which interact in unexpected ways with other features.

  For example, if, in the factory task, I give you a hint that the workers’ age is worth paying attention to, this information may send you off on a mental wild-goose chase if it eventually turns out that what matters (in this hypothetical factory) is doing the job not too fast and not too slow – and that work-rate is related to age, so that people in their thirties and forties are to be preferred to those in either their twenties (who are too quick) or their fifties (who are too slow). If this correlation is something that would never have occurred to you, then my suggestion has flipped you into what is a doomed attempt to try to understand how age is relevant, and, by the same token, diverted you away from just seeing what happens. People tend to assume, quite reasonably, that the information they have been given ought to be useful, so they keep trying to use it, even when that is not the best thing to do. And in doing so, they may effectively starve the unconscious brain-mind of the rich perceptual data on which its efficacy depends. The time when some instruction may be of practical benefit, it turns out, is later on, after the learner has had time to build up a solid body of first-hand experience to which the explicit information can be related.
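
  Writing the hypothetical hidden rule out explicitly shows why the hint misleads: the hint is true, but only via an intermediate variable it gives you no way to guess. The function names and thresholds below are illustrative inventions for this made-up factory, not drawn from any published task:

```python
def work_rate(age):
    """Hypothetical link between age and speed, as in the example above:
    twenties are too quick, fifties too slow."""
    if age < 30:
        return "too fast"
    elif age < 50:
        return "just right"
    else:
        return "too slow"

def good_hire(age):
    """What actually matters is the work-rate, not the age as such."""
    return work_rate(age) == "just right"

for age in (25, 38, 44, 56):
    print(age, work_rate(age), good_hire(age))
# The hint 'age matters' is correct, but only through this intermediate
# variable - so trying to use age directly sends you on a wild-goose chase.
```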

  The fact that giving instruction and advice, in the context of developing practical expertise, is a delicate business, is well known (or should be) to sports coaches, music teachers, and trainers of management or other vocational skills. Most coaches and trainers understand very well that the major learning vehicles, in their lines of work, are observation and practice, and that hints, tips and explanations need to be introduced into learners’ minds slowly and appropriately. Whatever is offered needs to be capable of being bound by learners into their gradually developing practical mastery. It must be tested against existing experience and incorporated into it, and this takes time. Coaching is, to draw on my earlier analogy, like making mayonnaise: you need to add advice, like oil, very sparingly. If you add too much, too quickly – if you are in a hurry – then the mind curdles, conceptual knowledge separates out from working knowledge, and you will be on the way to producing (or becoming) a pundit rather than a practitioner.

  The corollary of these results is that, when people find themselves in situations where learning by osmosis is what is called for, then they ought to learn better if they have given up trying to make conscious sense of it. If you have abandoned d-mode, it cannot get in the way. A recent experiment by Mark Coulson of the University of Middlesex suggests that this may well be so.5 He employed two variants of the ‘factory’ task, in one of which the relationship between the subjects’ responses and the system’s behaviour was fairly ‘logical’, and in the other of which it was not. In this second ‘illogical’ version, the system was programmed to respond in a way that depended on what the subject’s response had been one or two trials previously, rather than on the current trial – a relationship that does not make a great deal of intuitive sense. (This is somewhat analogous to the party game in which one person has to try to discover, by asking Yes/No questions, the nature of a ‘dream’ that everyone else has apparently agreed upon. Unbeknownst to the victim, the others respond Yes or No to her questions purely on the basis of whether her question ends in a vowel or a consonant. The fun comes from the fact that this rule starts to generate some fairly bizarre information about the ‘dream’, and that the more the victim tries to make rational sense of this information, the stranger the ‘dream’ becomes, and the less likely she is to discover the ‘trick’.)
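
  The ‘illogical’ contingency is easy to state in code even though it defeats intuition: the system’s behaviour tracks a response from one or two trials back rather than the current one. A minimal sketch of that lag structure – an assumed illustration, not Coulson’s actual program:

```python
def illogical_system(responses):
    """Output on trial t depends on the response given on trial t-2,
    not on the current one - hence 'illogical' to a deliberate reasoner."""
    outputs = []
    for t, _ in enumerate(responses):
        if t < 2:
            outputs.append(0)                     # nothing to look back on yet
        else:
            outputs.append(responses[t - 2] * 2)  # lag-2 dependence
    return outputs

print(illogical_system([3, 1, 4, 1, 5]))  # -> [0, 0, 6, 2, 8]
# Comparing each output with the *current* response yields no stable pattern;
# the regularity only appears two trials back.
```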

  Similar studies have shown that the logical task is amenable to the d-mode approach, while the illogical task is not. As with the dream game, the correlations between question and answer in the illogical version are so obscure that the attempt to follow sensible lines of thought and construct reasonable hypotheses about what is going on is unlikely to uncover them. The only effective strategy is to try to observe what is happening with as few preconceptions as possible. Thus subjects should do better on the illogical task if they have somehow been persuaded to give up d-mode before they start. Conversely, if they have abandoned d-mode they should do worse on the logical version.

  Subjects in Coulson’s study took part in either the ‘logical’ or the ‘illogical’ version of the task. Their job, as before, was to learn, over a series of trials, to control the factory process. However, in each version, half of the subjects had been given some advance ‘training’ in which the behaviour of the computer was completely random – an experience which, Coulson reasoned, would have weakened their faith in d-mode, as no amount of clever thinking could reveal patterns where there were none to be found. While overall subjects took longer to learn the illogical than the logical version, the group who had had the prior ‘random’ experience learnt the illogical task faster than those who had not. Subjects who had had the random experience, on the other hand, learnt the logical version more slowly than those who had not. Coulson argued that the preliminary experience of grappling with the random version of the task induces a state of confusion, so that, when the main task comes along, subjects have dropped d-mode in favour of a learning by osmosis approach. If the main task is actually the illogical one, this puts you at an advantage. Your learning-by-osmosis is unimpeded by the intellect. But if the main task is one which is amenable to being figured out, then you are disadvantaged if you have abandoned d-mode. Whether to back the hare or the tortoise depends crucially on the nature of the situation. If it is complex, unfamiliar, or behaves unexpectedly, tortoise mind is the better bet. If it is a nice logical puzzle, try the hare brain first.

  There are indeed many cases in which d-mode is the right tool, and in which the hare clearly comes out the winner. Imagine that you have a regular chessboard, an 8 x 8 chequered square, and you cut out two diagonally opposite corner squares (leaving 62 squares, see Figure 3). You make up 31 domino-shaped bits of cardboard, each of which neatly covers two squares on the board. You give me the mutilated board and the oblong pieces of card and ask me if I can exactly cover the 62 squares using the 31 bits of card, without cutting, bending or overlapping them. What do I do?

  My first thought may be that of course I can do it – 31 dominoes, each covering two squares; 31 x 2 = 62; QED. Your quizzical look, however, strongly suggests that it is not quite that simple. So what I then do is start laying out the dominoes on the board . . . but every time I try it, I always seem to be left with an odd square on the opposite side of the board from the available piece of cardboard. As I am deep down convinced that it is possible, I keep shuffling the dominoes around hopefully; but finally have to confess that I do not seem to be able to find a solution. A large amount of time, and some emotional energy, are consumed.

  You then invite me to think about the colours of the squares . . . especially the ones that have been cut out. I, having implicitly decided that the colours are irrelevant to the problem, and therefore given them no thought, wonder what you are talking about. Then I realise that the two opposite corner squares must be the same colour, either both black or both white. If you have taken two white squares away, that means that there are 30 white ones and 32 black ones left – an unequal number. But each domino has to cover two adjacent squares, i.e. one black one and one white one. So for the puzzle to be soluble there has to be not just an even number of squares, but an equal number of blacks and whites. Obviously – now I come to think about it – it can’t be done. (Imagine a 2 x 2 board consisting of four squares; take away diagonally opposite squares and, by analogy, the answer is plain.) Some straightforward deliberation could have saved me time and trouble.

  Figure 3. The mutilated chessboard
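
  The colouring argument can be checked exhaustively in a few lines. This sketch simply counts the squares of each colour once two diagonally opposite corners have been removed:

```python
# Colour a square (r, c) black or white by the parity of r + c.
board = {(r, c) for r in range(8) for c in range(8)}
board -= {(0, 0), (7, 7)}  # remove two diagonally opposite corners

whites = sum(1 for (r, c) in board if (r + c) % 2 == 0)
blacks = len(board) - whites
print(whites, blacks)  # -> 30 and 32: an unequal number of each colour

# Every domino covers one white and one black square, so a complete tiling
# would need equal counts of each - hence the mutilated board cannot be tiled.
```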

  Pawel Lewicki’s group at the University of Tulsa have investigated a slightly different aspect of the relationship between know-how and knowledge: whether they automatically change together, or whether learning that affects one can leave the other unchanged. The researchers focused on one particular set of patterns that we have all been developing since babyhood, and in which we might be considered to be quite expert: those that associate how people look with how they are likely to react – their faces, most obviously, with their moods and personalities. Even if much of this knowledge is implicit, we should have developed some conscious self-knowledge about the interpersonal rules of thumb that we tend to use. Spectacle wearers are likely to be studious, for example. People who don’t make eye contact are shy or shifty. People whose eyes have large pupils are more warm and friendly than those with small pupils. People whose heads loll about in an alarming way are probably not Cambridge professors. Everyone has their idiosyncratic set of diagnostic features. We think we can recognise ‘sad eyes’, ‘mean mouths’ or ‘business-like moustaches’.

  Lewicki first elicited from his subjects as many of these personal associations as they could give. He then asked them to look at a long succession of photographs of unfamiliar people, and to try to predict what their personalities were like. After each picture, they were given ‘feedback’ about how good their predictions were. Unbeknownst to the subjects, Lewicki had again ‘stacked the deck’, by determining the character which he attributed to each photographed person on the basis of some subtle combination of facial features. As with the experiments of his which were discussed in Chapter 2, subjects gradually got significantly better at making the predictions, even though they had absolutely no conscious knowledge of any connection between the facial features and the supposed personalities.

  However, there is a new twist. Lewicki had rigged his character attributions so that, for each subject, some of the connections between face and personality were the exact opposite of the ones which they had told him they relied on in everyday life. So in order to learn the patterns in the experiment, they were required to go against their normal assumptions. What effect did this have, Lewicki asked, either on the speed with which the subjects learnt the experimental pattern, or on the strength of their pre-existing rules of thumb? Should the mismatch not slow down the new learning, and/or cause some shifting in what is known consciously? You would think so – if you make the commonsense assumption that people’s self-knowledge is an accurate reflection of the way they go about things.

  In fact, Lewicki found that the subjects’ pre-existing conscious beliefs a) had no effect on the speed or efficiency with which the contrary associations were learnt through experience; and b) were themselves unaffected by the unconscious learning that had taken place. The undermind is acquiring knowledge of which consciousness is unaware, and by which it is unchanged, and using it to influence the way people behave. Consequently a schism develops between what people think they know (about themselves), and the information that is unconsciously driving their perceptions and reactions. The views that they espouse about themselves, we might say, become at odds with the ones that their behaviour in fact embodies.

 
