Hare Brain, Tortoise Mind


by Guy Claxton


  It is technically impossible to examine in a living brain how such large-scale, distributed collections of neurons come to be associated in the course of everyday kinds of learning. However, it is possible to write computer programs that simulate the properties of neurons, and explore the learning that relatively small numbers of such artificial neurons can accomplish. It turns out that these so-called ‘neural networks’ are remarkably intelligent. They can, for instance, mimic very closely the kinds of learning that we discussed in Chapters 2 and 3, where complex sensory patterns are picked up and transformed into expertise, without any conscious comprehension or explanation of what has been learnt.

  Take as an example the problem of using echo-sounding equipment such as Asdic and sonar to detect mines at sea. The need to discriminate between underwater rocks and sunken mines is obviously a pressing and practical one, both during naval warfare and in the clean-up operations that follow it. Yet it is not an easy problem to solve, for a number of reasons. Echoes from the two types of object can be indistinguishable to the casual ear. And the variations within each class are massive – both rocks and mines come in a wide variety of sizes, shapes, materials and orientations – much greater, apparently, than the differences between the two. If there are any consistent distinctions, they are almost certainly not to be made in terms of single features, such as the strength of a signal at a specific frequency, but will involve a variety of patterns and combinations of such features.

  Suppose we were to analyse any particular echo into thirteen frequency bands, and to measure the amplitude of the signal in each of these bands. Call the bands A, B and so on up to M; and say, for the sake of argument, that the signal strength in each band could range from 0 to 10. It is unlikely that any one of these bands on its own would provide us with a decisive fingerprint. The solution to the problem of discrimination is not as simple as saying that all rocks score more than 7 in band H and all mines score less than 7. It is not even as simple as saying ‘If the echo came from a rock, then its strength in band C will be between two and three times its strength in band J.’ The only kind of pattern that might conceivably distinguish rocks from mines will be something like: ‘An echo probably comes from a mine if either the total value on bands A, D and L exceeds the product of the value on bands E and F by more than a factor of six, at the same time as H minus K is less than half of J divided by B; or the total of G, H, K and L is more than 3.5 times the total of A, B and C divided by the difference between I and M.’ To make the discrimination successfully, if it can be done at all, will require the detection of patterns of this degree of complexity, ones that are hard to describe, let alone to discover.
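The compound rule quoted above is easier to appreciate written out explicitly. The sketch below is purely illustrative: the bands, thresholds and the rule itself are the hypothetical ones invented in the passage, not a real detection formula.

```python
# Illustrative only: encodes the hypothetical mine-detecting rule from the text.
# 'echo' maps band letters A..M to signal strengths in the range 0..10.
# (The made-up rule implicitly assumes bands B and I-M are non-zero.)

def looks_like_mine(echo):
    e = echo
    # "the total value on bands A, D and L exceeds the product of the value
    #  on bands E and F by more than a factor of six..."
    cond1 = e['A'] + e['D'] + e['L'] > 6 * (e['E'] * e['F'])
    # "...at the same time as H minus K is less than half of J divided by B"
    cond2 = e['H'] - e['K'] < 0.5 * (e['J'] / e['B'])
    # "or the total of G, H, K and L is more than 3.5 times the total of
    #  A, B and C divided by the difference between I and M"
    cond3 = (e['G'] + e['H'] + e['K'] + e['L']
             > 3.5 * (e['A'] + e['B'] + e['C']) / (e['I'] - e['M']))
    return (cond1 and cond2) or cond3
```

Even spelled out like this, the rule is an arbitrary tangle of sums, products and ratios; the point is precisely that patterns of this kind are hard to describe, let alone to discover by inspection.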

  In fact, human operators can become quite accurate at making these judgements, though, like the subjects in the ‘learning by osmosis’ experiments, they cannot articulate what it is they know. (You may remember the finely tuned ‘intuition’ of the sonar operator in the film The Hunt for Red October.) Nevertheless, human beings are less than perfect at it, and mistakes are potentially costly. To learn to discriminate between rocks and mines poses an interesting real-life challenge for a simulated brain.

  A neural network comprising only twenty-two different ‘neurons’ has learnt to perform this discrimination surprisingly well. The neurons are arranged together in three ‘layers’ (as shown in Figure 9). The first layer of thirteen ‘sensory’ neurons corresponds to the thirteen frequency bands into which the sound spectrum of the echo is divided. They are tuned to detect the signal strength within their particular band, and to emit a signal, like a real neuron’s burst of action potentials, that is proportional to this strength. All of these sensory neurons send their signals to each of the seven neurons that are arranged in the next layer. And each of these seven starts out by sending a copy of its output to each of two units in the final layer, the output of one of which corresponds to the decision ‘It is a rock’ and the output of the other signals that ‘It is a mine’. This simplified brain is not able to grow any more connections, but it is able to adjust the selective sensitivity of every neuron to each of the inputs that it receives, in exactly the way that real nerve cells do.

  Figure 9. A simple neural net for distinguishing rocks from mines

  The ‘job’ of the network is gradually to adjust these sensitivities, in the light of experience, so that the flow of activity through the connections reliably activates the ‘rock’ neuron whenever it is given a rock echo, and the ‘mine’ neuron when it is given a mine. Neither the programmer, nor certainly the computer, knows at the outset what the requisite sensitivities are, nor even whether a set of sensitivities that will solve the problem actually exists. The best the programmer can do is to get a large and varied set of genuine sonar echoes which she knows arose from either a rock or a mine, and to feed these, one by one, into the network, telling it, after it has generated a decision, whether it was correct or not. In this ‘training phase’ the computer is given some relatively simple ‘learning rules’ which tell it how in general to adjust the sensitivities of the neurons as a function of its success or failure. For example, the network may be programmed to adjust all the sensitivities after each trial on the basis of their history of being associated with a correct response. Those units that have a better ‘track record’ are adjusted very little; those that have a poorer track record are adjusted by a larger amount. Finally, after the ‘brain’ has been given a large number of such feedback sessions, where it is ‘told’ whether it was right or wrong, it can be tested with a new set of echoes it has never met before to see what judgement it now makes.
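The training procedure just described can be sketched in code. What follows is a loose, hypothetical reconstruction in plain Python, not the original experiment: it uses the standard ‘back-propagation’ learning rule rather than the simplified ‘track record’ rule described above, and synthetic stand-ins for real sonar echoes, but it shows the same three-layer 13–7–2 architecture learning from trial-by-trial feedback.

```python
import math
import random

random.seed(42)

N_IN, N_HID, N_OUT = 13, 7, 2   # bands A..M, hidden layer, (rock, mine) outputs
LEARNING_RATE = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The 'sensitivities' (weights), started at small random values.
w_hid = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w_out = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(echo):
    hid = [sigmoid(sum(w * x for w, x in zip(ws, echo))) for ws in w_hid]
    out = [sigmoid(sum(w * h for w, h in zip(ws, hid))) for ws in w_out]
    return hid, out

def train_step(echo, target):
    """One feedback session: show an echo, compare the decision with the
    right answer, and nudge every sensitivity to reduce the error."""
    hid, out = forward(echo)
    # Error signal at the output layer (derivative of the squared error).
    d_out = [(t - o) * o * (1 - o) for t, o in zip(target, out)]
    # Propagate the error signal back to the hidden layer.
    d_hid = [h * (1 - h) * sum(d_out[k] * w_out[k][j] for k in range(N_OUT))
             for j, h in enumerate(hid)]
    for k in range(N_OUT):
        for j in range(N_HID):
            w_out[k][j] += LEARNING_RATE * d_out[k] * hid[j]
    for j in range(N_HID):
        for i in range(N_IN):
            w_hid[j][i] += LEARNING_RATE * d_hid[j] * echo[i]

# Synthetic stand-ins for sonar echoes: here 'rocks' are arbitrarily made
# louder in the low bands and 'mines' in the high bands, plus noise.
def fake_echo(kind):
    base = [random.random() for _ in range(N_IN)]
    bands = range(0, 6) if kind == 'rock' else range(7, 13)
    for i in bands:
        base[i] += 1.0
    return base

TARGETS = {'rock': [1.0, 0.0], 'mine': [0.0, 1.0]}
training = [(fake_echo(k), TARGETS[k]) for k in ('rock', 'mine') for _ in range(100)]

for epoch in range(30):                 # the 'training phase'
    random.shuffle(training)
    for echo, target in training:
        train_step(echo, target)

def classify(echo):
    _, out = forward(echo)
    return 'rock' if out[0] > out[1] else 'mine'
```

After a few dozen passes through the training set, the network typically classifies unseen synthetic echoes far better than chance; and, just as the chapter emphasises, no explicit rule is ever written down anywhere. The ‘knowledge’ exists only in the adjusted weights.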

  In this example the network behaves exactly like the human subjects in the ‘learning by osmosis’ experiments. Simple neural networks turn out to be excellent models for this kind of learning. The network starts out ‘guessing’ and making many mistakes; but gradually its performance improves until finally it is capable of distinguishing quite accurately between rock and mine echoes which it has never heard before. These simulations show convincingly that brains can do what people do; that is, detect intricate, unverbalised patterns that are embedded within a wide range of seemingly diverse experiences, and use these to guide skilful action. Neither the real-life human being nor the artificial brain ‘knows’ what it is doing, nor on what basis it is doing it. Their ‘knowledge’ – successful, sophisticated knowledge – is contained in small adjustments in the way the neurons of the brain respond to each other; adjustments which simply direct the flow of activation along different channels, and combine it in different ways. All the brain needs is a diet of training experiences, some feedback, and clear, unpremeditated, unpressurised attention to what is happening; its intrinsic operating characteristics will do the rest.

  It is worth noting that, in the rocks and mines simulation, the artificial brain came to make the discrimination with a degree of accuracy that even surpassed that of experienced human sonar operators on a long tour of submarine mine-sweeping duty. The neural network, despite its simplicity, outclasses a human expert – not because the computer is ‘cleverer’, but just because we have not been equipped by evolution with ears sensitive enough to divide sonar echoes into so many frequency bands. We might confidently suppose that if the same kind of problem were to use, instead of a range of metallic ‘pings’, human babies’ cries denoting either ‘hunger’ or ‘wind’, mothers would outperform the computer comfortably. Conversely we might expect to find that a dolphin could be trained to beat both the computer and the human operator on the rocks and mines problem.

  The imperfect performance of human beings reminds us that there are, of course, limits to the complexity that the unencumbered brain can handle. The world must contain many subtle contingencies that even the fine tuning of the human brain cannot pick up – especially those that have not in the past been directly relevant to survival, or which embody new technological, pharmaceutical or sociological patterns which the biological receiving apparatus was not designed to detect. And also there are many situations which we might like to master where there simply is no useful information, no pattern, to be picked up. But what is clear is that the fundamental design specification of the unconscious neural biocomputer enables it to find, record and use information that is of a degree of subtlety greater than we can talk or think about. If we let our view of the mind as predominantly conscious and deliberate blind us to the value, or even the existence, of unconscious ways of knowing, we are the poorer, the stupider, for it.

  The brain works by routing activity from neural cluster to neural cluster according to the pattern of channels and sensitivities that exist at any moment, and that is all it does. Just as a pebble thrown into a pond starts an outward movement of concentric ripples, so activity in one area of the brain forms what Oxford neuroscientist Susan Greenfield calls an ‘epicentre’ from which activity spreads out, interacting with other flows of activation, and triggering new epicentres, as it goes. One can literally watch it happening. Studies by Frostig, Grinvald and their colleagues in Israel have used special dyes that can be introduced into cortical neurons, and which fluoresce when the cell becomes electrically active. If a spot of light is flashed into an animal’s eye, a neuronal cluster can be seen to form instantaneously and may double its size in a matter of 10 milliseconds. After 300 milliseconds there may be a very large group of active cells distributed over a wide area.

  The distributed nature of the neural clusters has been demonstrated by Wolf Singer in Germany. Singer has found that neurons that are widely separated across the visual cortex can nevertheless synchronise their firing patterns in response to a stimulus. Thus, as I have already suggested, the flow of activity is not literally from place to place, but between distributed patterns that continuously segue into one another. The brainscape is, as I have argued, delineated functionally, not physically. If we were able to track the brain’s activity, and simplify it, we should see something that looked not like a brightly lit train travelling at night, but more like a luminous kaleidoscope being continually shaken. But to show these iridescent patterns shimmering across the brain is beyond our technical capacity, not least because they move so fast. Ad Aertsen and George Gerstein have shown that neuronal groups are highly dynamic, forming and reforming within periods as short as a few dozen milliseconds. And what is more, the same neuron may take part in different patterns from moment to moment. Despite the huge technological problems, there is already some direct evidence for the existence and the properties of these neural patterns.6

  In addition to long-term, ‘structural’ changes in the brain, there is a variety of shorter-term influences on its responsiveness. The topographical ‘erosion’ of the brainscape is heavily modulated by much more transient influences. Brain responses are affected by the state of need, for example. If an animal is hungry, thirsty, sexually aroused or under threat, groups of neurons tend to adopt the same pattern of firing – to work in synchrony, in other words – more than when the animal is relaxed and sated. Heightened arousal seems to encourage groups of neurons to bind more tightly together into functional teams, and this, Susan Greenfield argues, has a number of interesting consequences, in addition to making each such group more excitable.

  Given that neurons are linked together by both excitatory and inhibitory connections, increased arousal can have a mixed effect, causing some neighbours of an active cell to fire more readily while effectively suppressing others. In particular, there tends to be what is called reciprocal inhibition between a group of neurons that is currently active, and others that lie outside the group, and the extent of this inhibition makes for a more or less competitive relationship between different centres of activity. When one cluster is creating a strong inhibitory surround, it will tend to suppress other potential epicentres, and at the same time it also tends to sharpen the borders of its own pattern. Instead of priming a broad range of its associates, to varying extents, inhibition makes for a clearer cut-off, and the neural repercussions of any centre of activation thus become more limited. In the presence of a drug – bicuculline – that is known to block mutual inhibition between neurons, a pool of activity can be enlarged as much as tenfold. Thus when arousal is lower, several different centres may be activated simultaneously, because the competition is less fierce; and at the same time patterns of activation that started from different centres can flow into each other like watercolours on wet paper. And from these effects, Greenfield argues, may arise a third consequence of arousal: precisely because of the greater competition, any current ‘winner’ is more unstable, more likely to be toppled by the next emerging epicentre at any moment, and thus the train of thought may gather greater speed.

  We know some of the chemical mechanisms that underlie this ‘neuromodulation’ effect. The brain stem, the oldest part of the brain, forms a bulge at the top of the spinal cord, and from it bundles of neurons project up into the midbrain, and thence into the cortex. It is these neurons that underlie the role of need, mood and arousal in varying the way cortical neurons behave. They can release chemicals called amines into the cellular milieu which make synapses transiently more or less sensitive. These amines include serotonin, acetylcholine, dopamine, norepinephrine and histamine. Acetylcholine, for example, inhibits one of the normal ‘braking’ mechanisms that causes a neuron to turn itself off after it has been active for a while. In general, neurons and neural clusters can be primed or sensitised by the influence of amines, so that they are more on a ‘hair trigger’.

  The dynamics of the brain can therefore vary in a number of different ways. The direction of activity flow is influenced both by the sensitivity of the long-term connections between cells and by the extent to which different areas are primed. A weak pathway may be temporarily boosted to the point where it is preferred over one that is normally stronger – and thus the ‘points’ can be switched so that the train of activation is diverted on to a less familiar branch-line. The breadth or degree of focus of activation of a concept may be reduced or expanded, so that a familiar conceptual hollow can be made to function as if it were either more or less clearly delineated – as more stereotyped, or more flexible, than its underlying set of structural interconnections would suggest. In one mood, a pattern may have boundaries that are clear and sharp; in another, its influence may spread more widely and taper off more gradually. The number and variety of different epicentres that can be simultaneously active also vary. In a state of high arousal, a single chain of associations that is more conscious and more conventional will tend to be followed. In a state of relaxation, activity may ripple out simultaneously from a range of different centres, combining in less predictable ways. And finally the rate of flow can vary. In a state of low arousal, a weak pool of activity may remain in one area of the network for some time before it moves on or is superseded. Under greater arousal – when threatened or highly motivated – activity may flow more rapidly from concept to concept, idea to idea.

  CHAPTER 10

  The Point of Consciousness

  In the beginning, man was not yet aware of anything but transitory sensations, presumably not even of himself. His unconscious brain-mind did all the work. Everything man did was without understanding.

  Lancelot Law Whyte

  Interesting intuitions occur as a result of thinking that is low-focus, capable of making associations between ideas that may be structurally remote from each other in the brainscape. Creativity develops out of a chance observation or a seed of an idea that is given time to germinate. The ability of the brain to allow activation to spread slowly outwards from one centre of activity, meeting and mingling with others, at intensities that may produce only a dim, diffuse quality of consciousness, seems to be exactly what is required.

  There is direct evidence that creativity is associated with a state of low-focus neural activity. Colin Martindale at the University of Maine has monitored cortical arousal with an electroencephalograph, or EEG, in which electrodes attached to the scalp register the overall level and type of activity in the brain. When people are more aroused, these ‘brainwaves’ are of a higher frequency, and are more random, more ‘desynchronised’. When they are relaxed (but still awake), their brainwaves are slower and more synchronised: the so-called ‘alpha’ and ‘theta’ waves. Martindale recorded the EEGs of people taking either an intelligence test – one that required analytical thought – or a creativity test – one which asked people to discover a remote associate that linked apparently disparate items, or to generate a wide range of unusual responses to a question such as ‘What could you use an old newspaper for?’ Using a standardised questionnaire, the subjects had first been divided into those who were generally creative and those who were not. Cortical arousal was seen to increase equally for both groups when they were taking the intelligence test, relative to a relaxed baseline. When subjects were working on the creativity test, the EEG of the uncreative subjects was the same as for the intelligence test; but the arousal level of the creative people was lower even than their baseline control readings.

  In a follow-up study, Martindale divided the creative task into two phases: one in which people were required to think up a fantasy story, and a second in which they wrote it down. He argued that the first stage, which he called the phase of ‘inspiration’, would rely on creative intuition, while the second, the ‘elaboration’ phase, would involve a more conscious, focused attempt to work out the implications of the storyline and arrange them into some coherent sequence. As predicted, those subjects who were judged to be less creative showed the same high level of arousal in both phases, while the creative subjects showed low arousal during inspiration, and high during the elaboration. In Chapter 6 I argued that the productive use of intuition required a variable focus of attention; the ability to move between the concentrated, articulated processes of d-mode and a broader, dimmer, less controlled form of awareness. Martindale’s results show that this fluidity is mirrored in the physiological functioning of the brain. Creative people are those who are able to relax and ‘let the brain take the strain’.1

 
