Tomorrow's People


by Susan Greenfield


  Invasive procedures in the future will inevitably, however, go more than skin deep and will have the potential, at least, to exert still wider influence – for example in one particularly delicate sphere of our lives. One day a neurosurgeon in North Carolina was conducting a pain-relief operation when he misplaced the electrodes in the spine of his patient. As he passed electric current through the electrodes she had an orgasm instead. The point of this story is that science can provide a precise means of manipulating our sexual sensations, and will be able to do so with increasing precision in the future. Then again, few would think that spinal surgery was an ideal way to go in quest of a climax.

  Joel Stein of Time magazine dreams of the ‘Holy Grail’, a machine that delivers a virtual experience so real that it's indistinguishable from actual sex, other than the fact that it is never disappointing. He points to the sci-fi prototypes – the ‘Pleasure Organ’ in Barbarella, and the ‘Orgasmatron’ in Sleeper. So far, perhaps unsurprisingly, Stein describes his own experience with such devices – involving only artificial disembodied organs and a link to an e-date hooker – as ‘repulsive’. Meanwhile, other attempts to develop a ‘sexperience’ over the net, via a bodysuit, have made little progress over the last decade; clearly, such ideas belong to the more remote future.

  Still, the Orgasmatron is a very real possibility. Using high bandwidth communication, lovers could feel as though they were making intimate contact without actually touching. Virtual sex will apparently be possible within a few decades, whereby a 55-year-old man could have virtual sex as a 29-year-old woman, or an online orgy with as much of humanity as he wanted. This experience need not involve a bodysuit, simply connection to the net. Along the same lines, Ray Kurzweil promises that in the future we will all be able ‘to have sex with whoever we want, at any time or place we want, at whatever age we want’. In the cyber-world at least the ‘unisex’ model for male-female relations would be feasible.

  In any event, we should not be too surprised if our sex lives have an increasingly virtual flavour. ‘Every single technology has had a sexual consequence,’ says James Petersen, one-time Playboy editor. Telephones, air shuttles and now online pornography can all be used as a means for, or a source of, new types of sexual experiences. We can already access much interactive sex experience, including sex chats and live videos. Perhaps Ray Kurzweil's vision of high-tech bodysuits for sexual fantasies, which we would never pursue in the real world, is not to be dismissed lightly.

  But the tricky issue is what we actually feel beyond the instant of orgasm. Beyond satisfying a very powerful drive, sex meets, for many, a still greater human need. And few would deny the other emotions, arising from our bond with partners, that for most of us place a real sexual relationship at a premium over one-night stands and masturbation. These wider, gentler and more complex feelings, beyond the flash flood of orgasm, could be exemplified in the kiss. Kissing makes you feel warm and connected: you are exchanging breath with another, and thus become one. It is no coincidence that prostitutes prepared to sell sex will usually not kiss their clients; somehow it's too intimate.

  So how will the kiss fare in the future? A kissing machine hooked up to a computer is as hard to imagine as it would be sad to experience. But surely the biggest question of all is whether there will still be a human need for such activity. We could instead all end up as though autistic, unable to empathize with anyone else, locked into a remote and numbing isolation, or at the very best trapped in a speedy, giggly cycle of endless cyber-flirting, with deeper needs and pleasures lost to us for ever. This scenario raises the big question whether generations in the future will have the same emotional needs as those of us born in the previous century. Is there really such a thing as human nature, which even the high-tech world cannot obliterate, the needs of which cannot be met by any manner or amount of mind-blowing IT? We cannot afford to be too complacent. Our minds are two-way streets. Just as we can ponder on how we will view new technologies, so those new technologies will impact on how we view the world.

  One possibility is that silicon implants in the brain could modify directly how we think. Already such implants are showing enormous potential for improving the quality of life of paralysed patients. At Emory University, neuroscientist Philip Kennedy and neurosurgeon Roy Bakay have implanted an electrode into the outer layer of the brain, the cortex, in a zone related to generation of movement. But this electrode is far from being just a sliver of metal; instead, it is made of glass containing a solution of trophic factors – proteins that help neurons to grow, and which act as a powerful target for attracting that growth towards the location of the implant. Clusters of neurons in the brain therefore slowly converge on the electrode, and after only several weeks form contact with it. Recording wires inside the glass electrode can now pick up signals from the neurons, and transmit those signals on through the skin to a receiver and amplifier outside the scalp. These devices are, in turn, powered by an induction coil over the scalp. Amazingly, after training, the completely paralysed patient can now ‘will’ a cursor to move and stop on a computer screen.

  Rats too are capable of such psychokinesis. John Chapin, of the Hahnemann School of Medicine in Philadelphia, has trained rats to press a lever. Via electrodes in the brain, the patterns of brain activity are analysed whilst the rats are performing the task. These same patterns of activity are then fed into a computer, which controls a robot arm. The ‘robo-rats’, in some cases, learn that they no longer need to press the lever but can instead ‘will’ an action – set in train the pattern of mental activity that controls the robot arm – so that the arm presses the lever for a reward of water. At the moment the movement of the robot arm itself has to be kept simple, although the ultimate goal is movement in three dimensions. In any event, it is all a long way from being widely applied in humans: as well as a system that is more stable and safe, a far more sophisticated array would be necessary.
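The decoding principle at work in these experiments can be sketched in a few lines of code. This is an illustration only, not the laboratory method: the number of recorded neurons, the trial counts and the simple linear decoder are all invented stand-ins for the real apparatus.

```python
import numpy as np

# Illustrative sketch: a pattern of neural firing rates, recorded while the
# animal performs a task, is mapped by a trained decoder onto a command for
# the robot arm. All names and numbers here are hypothetical.

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 8 recorded neurons over 200
# trials, paired with the lever force the rat actually produced on each trial.
firing_rates = rng.poisson(lam=5.0, size=(200, 8)).astype(float)
true_weights = rng.normal(size=8)
lever_force = firing_rates @ true_weights + rng.normal(scale=0.1, size=200)

# Fit a simple linear decoder by least squares: force ~ rates @ weights.
weights, *_ = np.linalg.lstsq(firing_rates, lever_force, rcond=None)

# Once trained, a fresh burst of activity alone -- no physical lever press --
# can be translated into a robot-arm command.
new_activity = rng.poisson(lam=5.0, size=8).astype(float)
predicted_force = float(new_activity @ weights)
press_lever = predicted_force > 0.0  # threshold turns it into a discrete act
```

The point of the sketch is only that the mapping from activity to action is learned from paired recordings, after which the action can be triggered by the activity pattern alone.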

  Both the neurotrophic electrode and the multi-electrode array may offer real hope to those who cannot move easily, having suffered either a stroke or damage to the spinal cord; but some fear that such technology could lead to implants even for those who are not paralysed – just like the robo-rats. Once this taboo had been overcome, the brain would be as open and as vulnerable to invasion by junk inputs as our email inboxes are now. But even the horror of these wilder scenarios pales by comparison when we think of the implications of an internet connection directly to the brain, apparently possible within the next twenty-five years or so: ‘Imagine that you could understand any language, remember every joke, solve any equation, get the latest news, balance your checkbook, communicate with others and have near-instant access to any book ever published without ever having to leave the privacy of yourself,’ wrote Bran Ferren, of Walt Disney Imagineering, in a New York Times magazine article.

  But such a happy fantasy ignores the more probable nightmarish aspects. What if the kind of junk that comes unsolicited currently on the internet is force-fed directly into your neuronal circuitry? Invasive marketing is, after all, already here – records of how you have been using the net give a continuous stream of clues as to what product will tempt you. We might be facing the sinister prospect of companies eventually narrow-casting messages directly into consumers' brains, whilst the only way consumers will be able to fight back may be by treating the expropriation of human attention as a form of theft.

  These doom-laden imaginings need a pinch of salt. Setting aside the obvious precaution of not volunteering for a brain implant, even if the opportunity for psychokinesis were too valuable to pass over, direct implanting of thoughts would still not necessarily be feasible. First, there is a huge difference between finally triggering a movement and the complex thoughts that either do or do not lead up to such movements. Relatively few neurons might be necessary to grow into the neurotrophic electrode, or to be stimulated by a modest array, in order to set signals buzzing down the wires and into a robot arm. But the critical issue here is that the output of the brain is essentially convergent: many factors and different signals from different areas all converge on the final part of the cortex, the ‘motor cortex’, that will set in train a movement, the contraction of muscle.

  By contrast, inputs coming into the brain are divergent: visual signals entering through the retina, for example, are divided up by the brain into colour, form and motion, all of which are processed separately, in parallel – in the case of vision in over thirty different brain regions. Because such multiple systems are activated simultaneously, via a strategic portal such as the optic nerve, it is not obvious how an implant in one particular area of the brain ‘downstream’ could have similar widespread ramifications. This factor is particularly relevant when you want to impose not just a simple, single sense on the brain but a multisensory, complex, insubstantial thought such as the abstracted contents of an email.

  Mental control over the physical world remains highly possible, however. It may well be hard to imagine anyone healthy volunteering for the requisite brain surgery for electrode implantation, just as it is difficult to contemplate any society that could afford to implement it wholesale for the entire population. Yet non-invasive techniques for manipulating the outside world by thought alone have been in existence for quite some time. First there was the ‘alpha train’, a toy train powered by an electric current which was switched on only when the brain-recording from the scalp of a human subject, the electroencephalogram (EEG), registered a certain pattern, the alpha wave, signifying a relaxed state. The whole point of the exercise was to use bio-feedback to educate people in the art of unwinding. Now, however, the scope of such devices is widening, and the goals are getting more complex – to link, for example, different patterns of brain waves with tasks such as rotating an object via a computer.
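The alpha-train idea reduces to a simple feedback rule: estimate the power in the alpha band of the EEG and close the circuit only when that power crosses a threshold, signifying relaxation. A minimal sketch, with an invented signal and an arbitrary threshold:

```python
import numpy as np

# Hedged sketch of the 'alpha train' principle: switch the toy train's current
# on only when the fraction of EEG power lying in the alpha band (roughly
# 8-12 Hz) exceeds a threshold. Sampling rate, threshold and the simulated
# signal below are all assumptions for illustration.

FS = 256          # sampling rate, Hz (assumed)
ALPHA = (8, 12)   # alpha band, Hz
THRESHOLD = 0.1   # relative alpha power needed to power the train (arbitrary)

def alpha_power(eeg: np.ndarray) -> float:
    """Fraction of total spectral power lying in the alpha band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(spectrum[band].sum() / spectrum.sum())

def train_switch(eeg: np.ndarray) -> bool:
    """Biofeedback rule: current flows only in a relaxed, high-alpha state."""
    return alpha_power(eeg) > THRESHOLD

# A relaxed subject: one second of dominant 10 Hz activity plus noise.
t = np.arange(FS) / FS
relaxed = np.sin(2 * np.pi * 10 * t) \
          + 0.2 * np.random.default_rng(1).normal(size=FS)
```

The same threshold rule, with richer pattern recognition in place of a single band-power estimate, is what underlies the more ambitious goals mentioned above, such as rotating an on-screen object by thought.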

  More sophisticated still, neurophysiologist Jessica Bayliss has been able to detect and monitor the tiny electric signal that leaks through the skull just before an action – the p300 brainwave. This specific electric signal can now drive events in a virtual-reality environment. Soon it may be possible to control events in the real world through an almost invisible, wearable computer; the roving movements of your eye would function as the mouse and your p300 wave as the click. Generations to come could be living in a world where objects are moved all the time by seemingly invisible hands, and countered by others. The term ‘strength of will’ may take on a far more literal meaning…
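In standard practice a p300 ‘click’ is picked out by averaging: the brain's response to each on-screen candidate is averaged over repeated presentations, and the candidate whose average shows the largest positive deflection around 300 milliseconds is taken as the one the user intended. A sketch on simulated data (every number here is invented):

```python
import numpy as np

# Hedged sketch of p300 'click' detection by trial averaging. Averaging
# across presentations suppresses the random background EEG and leaves the
# evoked response, which is larger for the attended (target) candidate.

FS = 256                      # sampling rate, Hz (assumed)
P300_SAMPLE = int(0.3 * FS)   # index corresponding to ~300 ms post-stimulus

rng = np.random.default_rng(0)
n_trials, n_samples = 20, FS  # 20 presentations, 1-second epochs

def simulate_epochs(is_target: bool) -> np.ndarray:
    """Noisy EEG epochs; targets carry a positive bump near 300 ms."""
    epochs = rng.normal(scale=1.0, size=(n_trials, n_samples))
    if is_target:
        t = np.arange(n_samples)
        bump = 2.0 * np.exp(-0.5 * ((t - P300_SAMPLE) / (0.05 * FS)) ** 2)
        epochs += bump
    return epochs

def p300_score(epochs: np.ndarray) -> float:
    """Mean amplitude at ~300 ms of the trial-averaged response."""
    return float(epochs.mean(axis=0)[P300_SAMPLE])

# Two on-screen candidates; the simulated user attends to candidate B.
scores = {"A": p300_score(simulate_epochs(False)),
          "B": p300_score(simulate_epochs(True))}
clicked = max(scores, key=scores.get)
```

Pairing this decision rule with eye-tracking for the pointer is exactly the mouse-and-click arrangement described above.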

  Some, such as the visionary physicist Freeman Dyson, have predicted that further in the future we will all be engaged in neurotelepathy – not some New Age mysticism but hard-nosed, direct interfacing between electronics and our brains that could make speech and action redundant. Whilst such ideas might make for an intriguing, or perhaps very boring, sci-fi film – no one doing or saying anything much to each other – my own view is that there will be no simple two-way street in and out of the brain. As we have seen, the divergent processing that occurs when signals go into the brain will make them very hard to simulate, or to leapfrog with some kind of central control, because there is no central control. But the convergent funnelling that lies behind a final single movement is far easier to intercept, just where the disparate central processing has been summed to a final command from the brain and is about to be translated into mechanical contractions of muscle. Thoughts will therefore be able to control objects in the outside world, but not the other way around.

  Instead, control over what goes back into the brain will come not with an intrusive and clumsy implant downstream in one of the tributaries of our sensory processing but at the entry portal itself, by controlling what hits the senses in the first place – the actual input. We might be finally on the brink of a world in which the virtual is as real and pervasive as the ‘real’ outside world. By fabricating a cyber-world that taps into all the senses, which then work in the usual way, neurotelepathy might have far more purchase on our minds, and hence our thoughts, than direct intervention via electrodes pushing into brain tissue.

  A still more pervasive invasion into our lives will not be a wholesale takeover by an alternative reality so much as an ‘augmented reality’ (AR). The goal of AR is to enhance your perception of, and hence performance in, the world. AR brings additional information beyond the raw inputs of the five senses. Labels, descriptions and information will superimpose on your normal vista; ultimately the user should not be able to tell the difference between the real world and the virtual augmentation of it. The whole point is that you will simply know more about what you are seeing and hearing from one moment to the next.

  One banal and obvious example of an early application will be instant information for tourists. The first generation of AR will be one view only, a few simple facts of generic interest, but the second generation will eventually be personalized for the user – emphasizing and suppressing features of a tour to cater for particular tastes and passions. Another immediate application of AR would be in surgical procedures. Imagine, for example, an operation on the brain aided by a scan superimposed on the patient's head in theatre with key areas lit up and labelled for the surgeon. Or imagine a military application – a pilot's view could be augmented with information – or engineering design using sections highlighted for manufacture, maintenance and repair. Another immediate possibility would be navigation guidance, warning of hazards ahead, for the visually disabled. Attention is now focused on the practicality of building AR into our daily routine. Scientists have already replaced the mouse with a simple finger-tracking system, and are now working on face recognition for use in wearable computers, embedded, for example, in spectacles. One prediction is that the first mass-marketed AR devices will be available by the end of this decade and will be the ‘walkman of the 21st century’; they will look like ordinary spectacles, with a light source on the side to project images onto the retina.

  Ultimately, AR could operate not just in the press of the senses around us but in all areas of life, from education to entertainment. In learning and using language, acronyms and synonyms could be explained, obscenity filters applied, and text read aloud in your favourite actor's voice. The AR system would be able to convey the meaning of a sentence known to the translator but not necessarily to the reader. There might even be attractive artificial additions to liven up a rainy day: Northern Lights, meteorites and supernovas. Meanwhile, real-time video filters could remove the pimples and wrinkles of those you see (and who see you). Even facial expressions could be changed. Not only would colours be super-bright but, in general, your senses could be sharpened beyond what is ‘natural’: you would be able to hear ultrasound and perceive X-rays, and radiowaves could be translated into visible light. A further intriguing medical application has been suggested – to create artificial yet highly conspicuous symptoms for diseases that lurk without making themselves easily apparent; for example, your toenails might go green as a result of the otherwise silent and invisible changes within your body, as hormone levels sink or blood pressure rises.

  The AI expert Ray Kurzweil has applied this line of thought to the question of how we might keep track of brain processes. He predicts that by 2020 we will be able to scan the brain from within, thanks to nanorobots roaming the interstices of our neuron networks, and reporting back on the latest configuration and current chemical transmitter availability. These nanoreporters would also send continuous updates on what parts of what networks were active. The big difficulty that he overlooks, however, is how to decode such data; moreover, the decoding would have to be different for every individual!

  In any case, it is not a new idea to devise some kind of window onto the living brain at work in a conscious human subject: brain scanning has been a familiar and valuable part of the neuroscientist's tool kit for some twenty years. The idea behind the earliest imaging technique, PET (positron emission tomography), is that the most hard-working parts of the brain can be pinpointed during the performance of a certain task. High-energy gamma rays travel from the brain, through the skull, to strike sensors arranged around the subject's head; the sensors are connected to a computer, and the appropriate part will light up on a brain-map on the screen. The gamma rays result from collisions between the electrons in the brain and positrons, subatomic particles emitted from radioactive material. This radioactive material, usually appropriately radio-tagged oxygen or glucose, appears in the relevant part of the brain because it is essentially fuel for the brain cells to function; it will go where it is most needed at the time – to the most hard-working parts of the brain. The only drawback of this technique is that the oxygen and glucose have first to be introduced by injection into the bloodstream, and so there is a time lag before the radioactive label reaches the brain. The matching of the time of detection to the time something actually happened – the time resolution – will not therefore be precise, in that the images appear over a protracted timescale of minutes. Hence, although useful for showing up how the brain works during a sustained task or condition, this technique is unable to capture the split-second of a unique moment of consciousness.

  Another technique, one which, unlike PET, does not require an injection, is fMRI (functional magnetic resonance imaging). The underlying idea, however, is the same – to exploit the fact that the hardest-working brain regions are hungry for oxygen and glucose. The principle behind fMRI is to detect changes in haemoglobin, which carries oxygen in the blood to the brain. When subjected to a magnetic field, the nuclei of the atoms line up and emit weak radio signals that are then detected in a scanner. The intensity of the signal varies according to the amount of oxygen carried by the haemoglobin, and therefore serves as an index of the activity of different parts of the brain, pinpointing areas of one or two millimetres square. However, even in this technique the time resolution is over seconds – still slower than the sub-second timescale over which the brain operates.

 
