Tomorrow's People


by Susan Greenfield


  So let's push on down to the more fine-grained level of neurons and neuronal networks. In the laboratory, using indirect and invasive techniques, we can readily study, as those interested in learning and memory do, how networks of brain cells respond to outside events, to experience; yet even then we will not be addressing the central question of the generation of consciousness. Nonetheless, my own view is that it is at this level, of transient assemblies of neurons, that there is most opportunity for eventually finding out more about the physical basis of consciousness. After all, we have seen that genes merely and intermittently make proteins, and as such contribute as a tool to the material fabric of the brain rather than setting the whole agenda. And just as there is no ‘gene for’ a particular mental function – least of all consciousness – so there is no ‘transmitter for’ a feeling such as happiness, nor a brain region that acts independently as a mini-brain. And if there were, we would be no wiser in elucidating the hard problem – we would only have miniaturized it.

  The main difficulty that we as scientists have with consciousness is that quintessential subjective quality of first-hand personal experience that makes it so hard to pin down in the physical brain. Such an elusive, purely qualitative phenomenon is, of course, not tractable to the way of things in a lab. One of the most dreary-sounding descriptions of a scientist, which may well have turned off many a schoolchild, is that we are in the business of ‘measuring things’. Yet far from simply peering at read-outs of different data about solids, liquids and gases, what we are really dealing with are phenomena or reactions or processes that vary in degree from one situation to another, for one reason or another. What if consciousness were not all or none, not some ineffable, magic quality of the brain, but something that also varied in degree?

  Scientists of the future will therefore probably need to search for something in the brain that reflects its holistic mode of operation, at all levels from the evolutionary through to the quantum mechanical; moreover, this something will not only be describable in terms of both macro- and microscale brain processes but will also visibly contract or expand to correlate with varying degrees of consciousness. We can, then, think of consciousness as a phenomenon that deepens or lightens, expands or contracts, is more or less from one moment to the next; it would be a phenomenon that is essentially variable and ranging in quantity from the here and now, the ‘booming, buzzing confusion’ of an infant or the flimsiness of a dream or a drunken moment to the deep self-consciousness of introspection of the adult human. We could then see how such ever-changing levels of consciousness match up with an appropriately changing landscape in the brain. But what might the something be, that we could measure, that was ever changing in the brain?

  The best candidate, capable of such mercurial fluctuation, is not a brain region, nor a gene, nor even a particular chemical but rather ever-dynamic assemblies of neurons. We saw earlier that brain cells can muster and disband in their tens of millions over a fraction of a second. Perhaps in the future there might be brain-imaging techniques that could capture in a split second the precise formation and subsequent break-up of large-scale working assemblies of neurons throughout the brain. Only then will scientists be able to see how good a physical correlate of consciousness such assemblies actually are and only then – finally – will they be able to explore the mechanics of their formation as a critical step towards understanding how the brain generates the subjective state.

  At this fine-grained level, the mathematician Roger Penrose, together with the anaesthetist Stuart Hameroff and the neurochemist Nancy Woolf, has attempted to work out how such assemblies might form to give the type of flash-flood coherence that could cater for a moment of consciousness. This particular approach uses the principles of quantum mechanics to show that a network of neurons could work as one ‘hyperneuron’, each neuron resonating with all the rest in a kind of collective neuronal operation referred to as ‘quantum coherence’.

  The immediate problem with this strategy is that quantum events are usually possible only in very cold environments, as we saw for the development of quantum computers. However, many physicists suggest that the brain might be a special case with a special means of isolating and protecting crucial quantum events against the backdrop of our very hot heads. A more crucial and still unresolved issue is how exactly quantum events could explain how tens of millions of brain cells work together for a fraction of a second using their delicate branches to receive and send electrical and chemical signals. As it stands, all quantum-level attempts at describing this neuronal coherence could just as easily be applied to any group of cells exposed to pulses of chemicals, in the heart or kidney or in virtually any other body location. And yet hearts and kidneys are not as relevant to consciousness as brains are. The scientists of the future who are working on consciousness will have to discover what additional constraining conditions there are in brains that are not in hearts. There must be something unique about assemblies of neurons forming in the brain that needs more than microtubules and the ability to synchronize.

  Moreover, were someone to come up with a quantum mechanical explanation integrated with macrophysiology, it is vital to remember that, even though assemblies of neurons might well be a faithful index of degree of consciousness, such an assembly of synchronized cells will not in itself be intrinsically capable of generating an inner state; on its own, an index of a process is not the same as the process itself. I would not mind betting serious sums of money, although it could not actually be proved of course, that a brain slice kept alive in a dish is definitely not conscious. It is not simply because of the paltry numbers of neurons available. A mollusc, for example, may well be conscious to some degree; the holistic organization of its nervous system is intact, and also connected still to the rest of its body. In a brain slice, however, much of the three-dimensional connectivity has been destroyed, and it is deprived of any inputs from or outputs to the rest of the organism.

  The two situations are therefore very different: in the brain slice, a neuronal assembly – if it is intact at all – has been isolated from the biological system with which it interacts, whilst in the mollusc it would be operating in its natural context. In the former the assembly would therefore be an index with no read-out to any larger encompassing organization, whilst in the mollusc the assembly could interface with the rest of the body, which would be intact and as cohesive as normal: in ways that are still a mystery there would be a modicum of consciousness. Future scientists would have to question how a changing network of neurons in a brain can read out the prevailing configuration, the neuronal landscape, via streams of peptides continuously reporting to the immune and endocrine systems, to the heart and lungs and kidneys, thereby orchestrating a cohesive state throughout the whole body.

  In the future it may be possible to image such assembly formation in real time within the brain, and to monitor the concentrations of different peptides in the body at different times; it may even be possible to test the theory that such assemblies are indeed an index of different degrees of consciousness – but it still does not follow that we will be able to solve the hard problem of how the physical brain has triggered a state that translates into a certain subjective experience.

  Even from this brief overview of the levels of brain organization and function over which scientists will need to range impartially we can see that a huge block to advancing our understanding of consciousness has to be the great diversity of approach, from the quantal through to the evolutionary. The really big difficulty is that different techniques and the very nature of such different types of investigation have driven the type of question that can be asked, and have provided respective ‘solutions’ that do not really address the hard problem itself. But in the future, neuroinformatics might come to the rescue, at least by preventing side issues specific to any one technique or discipline from clouding the real, deep question. And if a new theory ensues, by garnering data across disciplines, then I suggest the following criteria as the minimum kit by which the success of competing theories from different levels might be compared and evaluated:

  1. What is the question that the theory is actually addressing? If it is not the hard problem, or some variant, then the theory is not really getting to the nub of the issue.

  2. Can the theory explain how the consciousness of dreaming is both the same as and different from normal consciousness? There is no doubt that the consciousness of dreams is very different from that when we are awake, assuming of course that we are not on drugs and are not psychotic. Yet various markers like EEG patterns and protein-synthesis rates in dreaming are similar to those in wakefulness. So what physical substrate will reflect the difference in the subjective states?

  3. Can the theory explain how non-human animal consciousness is the same as, and how it is different from, human consciousness? (This question is a variant of 2.) Most would concede that most mammals are conscious, though some suggest drawing a line beyond which animals like the lobster are mere automata. But in the animal kingdom there is no corresponding clear-cut anatomical or physiological divide. In fact, the nervous system is designed along broadly common principles from the most primitive sea slug right through to primates.

  4. How does the theory differentiate self-consciousness, sub-consciousness and unconsciousness from consciousness? (Again, this issue follows from questions 2 and 3.) Though all non-human animals may be conscious, it seems a fair assumption that, like small infants, they do not enjoy the experience we call self-consciousness. And sometimes, in extreme situations of sport, dance, sex or drugs, we too can ‘let ourselves go’, abandon self-consciousness whilst still being conscious. Any good theory should be able to offer a description of what might be happening in the brain when you ‘blow your mind’.

  5. Does the theory attempt to describe how consciousness relates to the body as the boundary of self? If consciousness is generated in the brain, then a credible theory should be able to account for why we feel our bodies are the boundaries of ourselves. Although this issue might seem obvious, it will be critical in a far more mentally networked society; we will need to evaluate the dangers or absurdities of feeling part of a greater collective that breaches the fire wall of our sense of individuality.

  6. Can the theory explain how the same type of electrical signals arriving in different parts of the structurally more-or-less similar cortex translate so distinctly into experiences of vision, hearing and touch? Once more, this point is far less obvious than it might seem. To recap from Chapter 3, the outer layer of the brain, the cortex, has a similar neuronal architecture throughout; we know that when certain parts are active they match up with an experience of vision, other parts hearing, and so on. Although the answer might simply be that the difference comes from the different inputs to each zone of cortex, from the retina, cochlea and so on, the tricky riddle remains that those inputs all use the same system of electrical signalling. There is nothing intrinsic to the inputs to the visual or auditory cortex therefore that could match up easily to the difference in experience. If a theory could make progress on this point, it would be a big step towards understanding the causal relationship between brain events and subjective states.

  7. Can the theory explain how different drugs, such as morphine, LSD and amphetamine, produce different states of consciousness? If we accept that there is no subcomponent of the brain, be it anatomical structure, gene or chemical, that is not only necessary but actually sufficient for consciousness, then we need to understand how varying the availability of different chemicals by taking drugs can give rise to different types of consciousness. The drugs do something to chemical systems in the brain, which in turn change its holistic organization and way of operating to give rise to a shift in subjective sensation. Any theory that ignores this process, or cannot accommodate it, is not accounting with sufficient accuracy for how the brain generates the different ‘feel’ of different types of consciousness.

  8. Can the theory account for the effects of a placebo, and the psychological effects of certain peripherally acting drugs, such as the anti-hypertensive drug propranolol, as well as explaining the change in emotion caused by a cognitive stimulus, such as good or bad news? This question touches on the mechanism by which subjective states, generated by the brain, are triggered by indirect factors such as feedback from the rest of the body or from the outside world. There must be some intermediary change in neuronal networking that in turn influences how the brain will configure to generate the shift in consciousness. But as yet we have no idea how a succession of such events might unfold – nor even, indeed, what they are.

  9. In this particular theory, what salient feature of consciousness is being modelled, and what is left out? A model involves extracting the salient feature of a system and disregarding everything else. We saw that, in the case of silicon models, if no one knew what to leave out in the first place, there would be a risk that the all-important factor for sufficiency, as opposed to mere necessity, would be omitted. If, to be on the safe side, everything were retained, then the model would not be a model.

  10. What is the experiment that would test the theory, and what would constitute a persuasive ‘solution’? This question is almost as hard to answer as the hard problem itself. As yet no one has put forward the type of explanation that they would expect if someone were to claim that they had some concept of how the physical brain gives you the inner world that only you can experience at first hand, and how it might be proved. More intriguing still is to assume that somehow the hard problem had been solved – what would be the consequences? True understanding would give the ability not just to monitor but also to manipulate. We would therefore be able, with deadly accuracy, to transform individual consciousness, hack into it and share it around.

  Perhaps, unlike all of us today, scientists working on consciousness in the future will be able to deal with this list of questions; but more important still is not whether they will be able to think of an appropriate model or hypothesis to explain this biggest of questions left to science but rather whether they will be able to design an experiment that could falsify it. After all, as the philosopher Karl Popper famously stated, the whole point of science is to be able to test, to falsify ideas with empirical data.

  The three big questions that Haldane posed, echoed by the cynic John Horgan, concern time and space, then matter, then our own bodies. They still stand as great challenges to the scientists of the future. If scientists as such still exist, and if human imagination and curiosity can survive the suffocation of continuous sensuality and easy access to anything in life, from a fact to a relationship, then that same awesome technology could be harnessed to stimulate new revolutions of ideas and, in turn, new technologies not just decades but centuries hence. Might someone in the foreseeable future be writing a book about the implications of the fact that space, time, matter and individual subjective experience no longer have any meaning? Of course neither such a book, nor such a someone, could exist – by definition – in such a scenario…

  But there remains Haldane's last big question, regarding ‘the dark and evil elements in [man's] own soul’. That great thinker of the 20th century, Bertrand Russell, foresaw the limitations of science in replacing religion, of not necessarily giving people what they wanted, and hence of contributing to, rather than alleviating, further strife. In 1924 he delivered a reply to Haldane's Daedalus:

  Science has not given men more self-control, more kindliness, or more power of discounting their passions in deciding upon a course of action. It has given communities more power to indulge their collective passions, but, by making society more organic, it has diminished the part played by private passions. Men's collective passions are mainly evil; far the strongest of them are hatred and rivalry directed towards other groups. Therefore at present all that gives men power to indulge their collective passions is bad. That is why science threatens to cause the destruction of our civilization…

  This paper was entitled Icarus, after Daedalus' son, who perished because his arrogance took him, borne on his waxen wings, too close to the sun. Just how true is Russell's black prophecy? Will 21st-century science and technology deprive us of our ‘private’ passions, our individual free will, and what will happen to the human tendency for ‘evil’?

  8

  Terrorism: Shall we still have free will?

  You enter an office block and accept as routine that the receptionist will hand you a perforated form for the plastic pocket of a security badge. You check in at the airport and find nothing unusual or unreasonable in having to surrender nail scissors or remove your shoes for inspection; meanwhile the sight of an unattended bag in a crowded concourse immediately attracts attention and anxious enquiries. Such scenes, if they had been prophesied several decades ago, would have been met with incredulity. But now we are no longer really at peace; security is second nature. As I am writing this, the USA has been put on ‘Orange alert’, the second-highest level of national danger. Acts of terrorism, if sufficiently large-scale or frequent, could well put an end to civilized life as we know it – and certainly to the sophisticated high-tech existence that, for good or ill, is otherwise in store for us in the future.

  Yet surely the march of technology will eventually work against the narrowing of the mind that defines the heart-and-soul fanatic. The autonomy of surfing the internet and the enhanced passivity and hedonism that seem set to characterize the mid-21st-century lifestyle, not to mention the physical distancing from any overly persuasive acquaintances that IT affords, should all augur well for individuals without any nationalist or extremist proclivities. In fact, in the cyber-world we could adopt the virtual guise of another race and gender, so other cultures and customs would not be so alien, and we would become ever more tolerant and broad-minded. On the other hand, those same technologies might fuel the strong feelings that tribal membership can engender, with an apocalyptic dimension more destructive than has ever before been possible. An important issue in 21st-century life will be the extent to which scientific advances will magnify the potential of tools of warfare, and whether that same scientific progress will, in addition, dispose the mindset of future generations even more strongly – as Bertrand Russell predicted – towards their deployment in acts of violence.

 
