Hare Brain, Tortoise Mind


by Guy Claxton


  An experiment by Jerome Singer demonstrates how increasing the desire for a solution can lead to the coarsening of perception. He asked subjects to estimate the size of a square placed at some distance away from them down a long corridor, by selecting one from an array of different-sized squares arranged on a stand to one side of them. Although this looks simple enough, it is in fact quite a difficult task, precisely because there is so little information with which to judge the size of the distant square. All kinds of subtle cues, such as shadow, brightness and the visual texture of the square, might be helpful. So though the square occupies a very precise point in the visual field, subjects will benefit from having a wide beam of attention in terms of the range and kinds of cues to which they are attentive. It is the sort of task, in other words, which might prove sensitive to the effects of pressure. When subjects were required not just to make their judgement but to imagine that they had a bet riding on its accuracy, their performance deteriorated – even with an imaginary as opposed to a real stake. In another version of the study, subjects were asked to spend fifteen minutes, prior to the size test, working on an insoluble problem, with the experimenter feigning surprise and disappointment at their poor performance. This was sufficient to induce a mood of anxiety and frustration which, in its turn, coarsened the perception of the cues in the distance test, and caused a deterioration of performance.

  CHAPTER 9

  The Brains behind the Operation

  The Brain – is wider than the Sky -

  For – put them side by side -

  The one the other will contain

  With ease – and You – beside -

  Emily Dickinson1

  We now know a considerable amount about what the intelligent unconscious can do, and the conditions under which it works best; but it remains to discover what it is – how it is physically embodied – and exactly how it works: how it makes available the slower ways of knowing and the powers of subliminal perception. We know very clearly that there is a ‘brains’ behind the operation, but who, or what, is it?

  The brain – the three pounds of soft wrinkled tissue that occupies the skull – is the focus of intense research activity at the moment. The 1990s were designated the ‘Decade of the Brain’ by the US Congress. At the 1996 British Association for the Advancement of Science ‘Festival of Science’, the annual public showcase for the work of scientists in Britain and elsewhere, the two-day symposium on ‘Brains, Minds and Consciousness’ had to be moved from its original venue to the largest lecture theatre on the University of Birmingham campus in order to accommodate the audience. Scarcely a month goes by without the appearance of another book by one of the leading figures in the thriving new discipline of ‘cognitive neuroscience’. In trying to understand the physical substrate of the slower ways of knowing, the brain is clearly the most fruitful place to start. If we first explore what the brain does, or what it can plausibly be supposed to do, we shall then be able to see what, if anything, is ‘left over’, in need of explanation by some other means.

  The brain is one of three main systems that coordinate the workings of a whole animal. Together with the hormonal system and the immune system, the central nervous system, of which the brain is the headquarters, ensures that all the different limbs, senses and organs of the body act in concert.2 The brain integrates information from the eyes, ears, nose, tongue and skin with data about the state of the inner, physiological world, and, by referring this information to the stored records of past experience, is able to construct actions that respond as effectively as possible to the current situation. The brain assigns significance, determines priorities and settles competing claims on resources, for the common good. It ties together needs, as signalled from the interior, opportunities (and threats) in the environment, as flagged by the five senses, and capabilities, as represented by the programmes that control movement and response. And it is able to do this, in the case of human beings, with such consummate elegance and success because it remembers and learns from what has happened before.

  The brain is composed of two types of cells – glial cells and neurons – both in profusion. The glia seem to be mainly responsible for housekeeping: they mop up unwanted chemical waste, and make sure that the brain as a whole stays in optimal condition. But it is the neurons, approximately one hundred billion of them, that give the brain its immense processing power. Each neuron is like a minute tree with roots, branches – the dendrites – and a trunk called the axon. The neurons in the brain vary considerably in their actual size and shape. Some are straggly and leggy, with long axons running for some millimetres through the brain; others are short and bushy, with dense dendritic branches, but perhaps measuring only a few thousandths of a millimetre from end to end. However, all the neurons have the same function: they carry small bursts of electric current from one end to the other.

  The neurons are packed together into a dense jungle, and where their roots and branches touch (at junctions called synapses) they are able to stimulate one another. Electrical activity in one – what we might call the ‘upstream’ neuron – influences the likelihood that its ‘downstream’ neighbour will become electrically active in its turn. Normally each downstream neuron needs to collect the stimulation from a number of its upstream neighbours, until it has built up enough excitation of its own to exceed some ‘firing threshold’. When it fires, a train of impulses is initiated along its own axon and into its dendrites, where it can contribute to the excitation of other cells with which it is in contact. No one input will fire a downstream cell on its own, but each contributes to this general pool of activation, making the cell more or less disposed to fire in response to other inputs.
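
  For readers who like to see a mechanism spelled out, the summing-to-threshold idea can be caricatured in a few lines of code. The sketch below is purely illustrative and is not drawn from the book or from any real neural model: the function name, the numbers and the synaptic 'strengths' are invented for the example.

```python
# Illustrative toy only: a "downstream" neuron that pools the stimulation
# arriving from its upstream neighbours and fires only when the combined
# excitation exceeds its firing threshold.

def downstream_fires(upstream_activity, synaptic_strengths, threshold=1.0):
    """Return True if the pooled, weighted input reaches the firing threshold."""
    excitation = sum(active * strength
                     for active, strength in zip(upstream_activity, synaptic_strengths))
    return excitation >= threshold

# No single input is enough on its own, but several together tip the cell over.
activity = [1, 1, 1, 0]            # which upstream neighbours are currently firing
strengths = [0.4, 0.4, 0.3, 0.9]   # how strongly each synapse excites the cell
print(downstream_fires(activity, strengths))   # True: 0.4 + 0.4 + 0.3 = 1.1
```

  No single strength here reaches the threshold of 1.0 by itself; it is only the pooled contribution of several simultaneously active neighbours that makes the cell fire, which is the point of the paragraph above.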

  The elaboration of the story of how the neural electrical impulses originate, are conveyed along the axon, and serve to activate other neurons is one of the most notable successes of twentieth-century science, and it has been told in detail on many occasions. In brief, each neuron is covered with a semi-permeable membrane that is able to retain some kinds of chemicals within the body of the cell and keep others out. Many of these chemical particles carry a small electrical charge, either positive or negative, and the membrane, in its normal state, is selectively permeable to these ‘ions’, in such a way that it is able to maintain an electrical gradient, a potential difference, between the inside of the cell and the fluid which surrounds it. However, under the influence of other chemicals – neurotransmitters – which may be released into the ambient fluid, the character of the membrane changes so that charged ions are allowed to flow across it, and it is this flow which initiates the chain of events that may result in a burst of electricity, an action potential, travelling from one end of the neuron to the other.

  Action potentials occur spontaneously at more or less regular intervals: nerve cells are never completely quiet. Even when we are asleep their activity continues. But the pattern and frequency of firing can be dramatically altered by events at the synaptic junctions with other cells. A wave of electricity arriving at a synapse from an upstream cell causes neurotransmitters to be released into the gap between it and the downstream cell. These molecules float across the gap and attach themselves to receptors on the membrane on the downstream side, causing it, in its turn, to allow charged ions to flood into the cell and set off another action potential. The stimulation that one neuron gives to another can be inhibitory, making the next-door cell less likely to fire, as well as excitatory.

  Each cell may receive stimulation from up to 20,000 different sources, so the neuronal jungle as a whole is incredibly tightly and intricately interconnected. It is estimated that there are of the order of one million billion possible connections in the outer mantle, the cortex, of the brain alone. If we could lay the brain out neatly, we would see that on one ‘side’ there are all the incoming ‘calls’ from the inner and outer senses; on the other there are the outputs and commands to all the muscles and glands of the body; and in the middle is this vast tangle of living wires, immersed in a complex, continually changing bath of chemicals, integrating and channelling messages from one side of the network to the other, with many, many loops and diversions on the way.

  Figure 8. A stylised neuron (reproduced with permission from Scientific American)

  The electrical communication between neurons can be changed permanently as a result of experience. Cells that were originally strangers, and relatively ‘deaf’ to each other, can develop close associations, as a result of which one only has to whisper to attract the other’s attention where before it had to shout. One way in which experience affects the long-term flow of neural communication is through physical dendritic growth. It has been shown that animals who live in environments full of rich stimulation develop much bushier neurons than do those whose worlds are dull and monotonous. The total number of synapses can increase. But synapses are also capable of becoming easier for an impulse to cross – and for this we need a different process: ‘long-term potentiation’ (LTP).

  When neurotransmitters are released into the synaptic channel between the upstream (currently active) and downstream (currently inactive) neurons, some of the pores in the downstream membrane open up easily to let the charged ions across. However, others, the so-called NMDA (N-methyl-d-aspartate) receptor sites, start out by being more tightly constricted, and will only open up if they are subjected to stimulation that is strong and long-lasting. But once they have been opened, they take less persuading on subsequent occasions. For some time after they have been subjected to strong stimulation, the NMDA pores will respond to a much weaker signal. This is one of the fundamental mechanisms that allows the brain to learn.3 As one of the pioneers of brain research, Donald Hebb, wrote in his seminal book The Organization of Behavior in 1949, ‘When an axon of cell A is near enough to excite a cell B and repeatedly and persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.’4 Or, more informally, ‘cells that “glow” together, grow together’.
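
  Hebb's principle is simple enough to render as a toy calculation. The following sketch is again purely illustrative, not taken from the book or from any neuroscience library; the starting weight and learning rate are arbitrary numbers chosen for the example.

```python
# Illustrative toy only: a caricature of Hebb's rule. Whenever cell A and
# cell B are active at the same time, the A->B synapse is strengthened, so
# that A has to "shout" less loudly to get B's attention next time.

def hebbian_update(weight, a_active, b_active, learning_rate=0.1):
    """Strengthen the synapse a little whenever A and B fire together."""
    if a_active and b_active:
        weight += learning_rate
    return weight

w = 0.2                      # A starts out as a near-stranger to B
for _ in range(5):           # five occasions on which A and B fire together
    w = hebbian_update(w, a_active=True, b_active=True)
print(round(w, 1))           # 0.7: the same signal from A now counts for more
```

  Repeated co-activation does all the work: nothing in the rule refers to what the cells 'mean', only to the fact that they were active at the same time.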

  An important characteristic of LTP is its specificity. Though one neuron receives inputs from a myriad of other cells, if, through LTP, it becomes more responsive to one particular upstream neighbour, it does not become indiscriminately more intimate with the others. Thus there are mechanisms in the brain which enable specific paths of facilitation to be developed between groups of neurons. When a human baby is born, there are certain genetically determined frameworks that serve to impose a general structure on the brain, but much remains unfixed. The brain is like a vast lecture theatre of students on their first day at university: full of potential friendship, but as yet strangers to each other. After a few weeks, however, each student has begun to belong to a number of different evolving circles of acquaintance: study groups, sports teams, neighbourhoods and clubs. Just so, every neuron in the brain comes to belong to a variety of developing clusters, each of which is bound together in such a way that stimulation of one member, or a small group of members, is likely to lead to the ‘recruitment’ of the others. And the major reason why they tend to become associated is simply that they are active at the same time. Communication becomes more selective, and information begins to flow, throughout the neural community as a whole, along more stable channels.

  In order to begin to make links between our understanding of mind and brain, we need to think in terms of the behaviour of these clusters or assemblies of neurons, not of the individual nerve cells themselves. Though we need the molecular level to help us understand how assemblies of cells are formed, and how they behave, there are properties of large assemblies of neurons that are not reducible to their biochemistry – just as the way a hockey team functions cannot be understood from even the most detailed observation of a single player. Nor could the behaviour of an individual student in the context of her hockey practice be predicted or explained on the basis of her performance in the chemistry laboratory, or the college bar. Though there is some direct evidence about how assemblies of neurons behave, it is much more difficult to gather, and still relatively sparse compared with what we know about individual cells. We have to reach out beyond established fact into the realm of plausible hypothesis; to use what we know as a basis on which to build more holistic images of what the brain is like and how it works.

  Suppose you are shown dozens of photographs of the same person, Jane, in different moods, wearing different clothes, in different company, engaged in different activities. Some of Jane’s features will stay the same across all the photos: the colour of her eyes, the shape of her nose, and more hard-to-pin-down constellations of the way her features are put together. You may not be able to say what it is about the pattern of Jane’s face, but after a while you would recognise her anywhere. The neural clusters that correspond to these ‘core features’, those that recur every time you see Jane, will always be co-active, and it is they that will therefore bond most tightly together. They become the nub of your concept of Jane. Others of her features, such as her ready smile or her penchant for floppy hats, while not critical, become closely associated with her representation, so that, in the absence of any definite information to the contrary, you may automatically fill in these default characteristics when you think of Jane.5

  More loosely associated still, there are a whole variety of features and associations that have been connected with Jane, but which are less characteristic or diagnostic of her: the time she was photographed with chocolate ice cream all round her mouth; the scarlet suit she wore to Tim and Felicity’s wedding. These memories, we might say, form a neural penumbra that is activated a little when we are reminded of Jane, but not – unless the context demands it – strongly enough for them to fire in their own right. Thus the overall neural representation of Jane is not clear-cut. It is composed of an ill-defined, fuzzy collection of features and associations, some of which are bound very tightly together at the functional centre of her neural assembly, while others are more loosely affiliated, and may on any particular occasion form part of the activated neural image of Jane, or may not. Some of these features, whether core, default or incidental, may be distinguishable, or nameable, in their own right – ‘nose’, ‘smile’ – while others may comprise patterns that are not so easy to dissect out and articulate.

  Depicting the concept of ‘Jane’ as a vast collection of more or less tightly interwoven neurons does not commit us to supposing that there is a single ‘Jane’ neuron anywhere in the brain to which all the others lead, or even that the ‘extended family’ of neurons must all be found in the same location within the brain. There is plenty of evidence that such families of neurons can be – indeed usually are – widely distributed across the brain as a whole. Even if we focus only on the world of sight (and concepts are generally multi-sensory) we find that different aspects of vision – colour, motion, size, spatial location – are processed in quite widely separated areas. Current estimates suggest that there are at least thirty to forty discrete areas of visual processing in the brain, and that these different systems of neurons are themselves interconnected in highly intricate ways. Add in the other senses, as well as memory, planning and emotion, and there will be traces of ‘Jane’ in every corner of the brain. Just as, in the modern world, geographical proximity is only loosely indicative of the strength of people’s relationships, so the intimacy between neurons is reflected in their functional, and not necessarily their physical, closeness.

  Since before birth, experience has been constantly binding the brain’s neurons together into functional groupings which act so as to attract and ‘capture’ the flow of neural activation. And these centres of activity in their turn become strung together to form pathways along which neural activation will preferentially travel, so that the brain as a whole develops a kind of functional topography. To explore the consequences of this idea, we might imagine that a concept like ‘Jane’ or ‘cat’ or ‘student’ forms an activation ‘depression’ towards which neural activity in the vicinity will tend to be drawn, as water finds its way into a hollow. Experience wears away bowls and troughs in the brain which come to form ‘paths of least resistance’, into and along which neural activity will tend to flow.

  At the bottom of a hollow are the attributes that are most characteristic of its concept: those by which, whether we know it or not, the concept is recognised. On the sides of the valley lie the default properties, and higher up are those associations that are optional or incidental. Experience erodes and moulds the mass of neurons into a three-dimensional ‘brainscape’ where the ‘vertical’ dimension indicates the degree of functional interconnectedness, the mutual sensitivity and responsivity, of the neurons in that conceptual locality. The deeper the dip, the more tightly bound together the neurons; the more ‘deeply engrained’, we might say, that concept, that way of segmenting reality, is. But hollows vary in their steepness of incline as well as their depth and their size. Valleys that have steep sides are those where the concept being mapped is well defined: it has relatively few associations that are not criterial. Gentle slopes indicate a wider range of looser connections and connotations.
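
  One standard way of making the hollow-and-valley picture computational is an attractor network of the kind introduced by John Hopfield; the author does not describe such a model, so the sketch below is offered only as one possible rendering of the metaphor. The eight-feature coding of 'Jane' is invented purely for illustration, and real brains are vastly more complicated, but it shows the key behaviour: start the network from a partial or partly wrong cue, and its activity rolls 'downhill' into the nearest stored hollow.

```python
# Illustrative toy only: a tiny Hopfield-style attractor network.
# A stored pattern behaves like a hollow in the "brainscape": activity
# started nearby is drawn into it, like water finding its way into a bowl.
import numpy as np

jane = np.array([1, 1, 1, -1, -1, 1, -1, 1])   # an invented +1/-1 feature code for "Jane"
weights = np.outer(jane, jane)                  # Hebbian storage: co-active features bind together
np.fill_diagonal(weights, 0)                    # no self-connections

cue = jane.copy()
cue[3:5] = 1                                    # a partial, partly mistaken glimpse of Jane
state = cue
for _ in range(5):                              # let activity flow along the paths of least resistance
    state = np.sign(weights @ state)
    state[state == 0] = 1                       # break ties towards 'on'

print(np.array_equal(state, jane))              # True: the cue has settled into the "Jane" hollow
```

  With only one pattern stored, the landscape has a single deep hollow; storing several patterns carves several hollows, and which one a given cue falls into depends on which stored pattern it most resembles: a crude but serviceable analogue of the 'paths of least resistance' described above.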

 
