
Tomorrow's People


by Susan Greenfield


  ‘Machines will probably surpass overall human intellectual capability by 2020, and have an emotional feel just like people,’ promises Pearson. Well, we have seen that computational powers may well overtake us, but these don't necessarily entail a superior intellect. As physicist Niels Bohr, a Nobel laureate, once admonished a student, ‘You’re not thinking, you're just being logical.’ And as for having emotions ‘like people’, there is no evidence at all that such soft-hearted mechanical beings could or would come to pass. The critical issue is whether they indeed develop ‘self-awareness and consciousness’; despite diverse and competing approaches, we saw earlier that nothing so far in AI or IT suggests that there are any grounds at all for taking such an outcome as an article of faith.

  But before we all sink back into complacency, we should examine the alarming prospect that ‘relating to machines will be more pleasant than dealing with humans’. True, they may not be conscious, and they may not be hell-bent on world domination – but there is a clear threat that is just as sinister and certainly more likely. First is the matter of how much more positively we shall interact with machines of the future than with our co-species: although at the moment we are used to directing our emotions at a variety of inanimate objects – not just computers but cars and toys as well – in the future we shall spend much more of our time interacting with advanced IT and sophisticated robots as opposed to humans. In fact, we shall inevitably find these artificial interlocutors more predictable, reliable, efficient and tolerant of our temper outbursts, stupidity and egotism. Gradually we could become more petulant and impatient, less able to think through problems, both social and intellectual, and utterly self-obsessed. And the poorer we become at social interactions, the more we will seek solace with our cyber-friends. One difficulty for a mid-21st-century family enveloped in cyber-living may be that the family will seem ‘boring’ to each of its members, compared to the indulgent companionship of the net. And the more people immerse themselves in the net, as some might argue is happening already, the more they will cease to develop the appropriate social skills of give and take that Nancy Mitford noted in the mid 20th century: ‘The advantage of living in a large family is that early lesson of life's essential unfairness.’

  To take things even further, if you never have to consider the thoughts and actions of others – because the cyber-world is endlessly accommodating and forgiving – there might even be a progressive retreat into that world. Generations to come may live each in their own inner world, reacting to and with machines, preferring a virtual time and space and family to their own flesh and blood. We are already sadly familiar with this phenomenon in autistic children; such children have difficulty attributing independent thoughts and beliefs to other people. They do indeed see other people as machines that have no emotions and with whom they are unable to establish a relationship. Could the new technologies be predisposing society to produce a larger number of autistic-seeming children, who would not play a full or active part within what's left of the traditional family unit?

  Just as the cyber-friends will help us escape from the old constraints of living in the real world, so too that real world itself will be increasingly dominated by invisible and ubiquitous computing; our grasp of ‘reality’ and our notion of a stable, consistent world ‘out there’ might start to disintegrate, as we live from one moment to the next as the passive recipients of wave upon wave of multimedia information flooding into our brains.

  But stranger still will be the control those same brains have on the outside world. Our mere thoughts might move objects, and we will witness those around us doing likewise. Although the nature of brain processing probably discounts the possibility that we will be able to hack into someone else's brain with email via a convenient and trendy-looking implant, psychokinesis – if not full neurotelepathy – might no longer be just the stuff of weirdo-babble. Yet the very feel of thinking things to move, of wearing the small devices that enable us to do so, not to mention having internal prostheses that enhance our senses and abilities – all these developments will transform how we think of our bodies, in terms of our abilities and in terms of their boundaries with the outside world. IT and AI will grow into a neurotechnology that blurs reality and fantasy, and dilutes that previously solid ‘self’ into a wash of carbon-silicon phenomena that will amount to life and living. And if this is how we will be, then what shall we actually be doing?

  4

  Work: What will we do with our time?

  ‘So what do you do?’ The classic opening gambit almost everywhere in Western society. Rightly or wrongly, work defines you. It can give you status, encapsulate in a word your skills and knowledge, and even hint strongly at your predispositions and emotional make-up. Imagine then a society where jobs no longer exist. The notions of ‘workers’ and ‘management’ have long been consigned to history, as all the late 20th-century dreams of the human resources industry have come to pass. Instead of a cumbersome crowd of biddable operatives, the workforce comprises flexible, curious, commercially savvy individuals who are fully aware of their own strengths and weaknesses. These paragons accordingly take responsibility for planning their own career paths. Everyone is an expert or specialist in something, and on the alert for lifelong retraining. On an almost daily basis these multi-module operatives will make instant, plug-in-and-play contributions to small, highly flexible companies with fluctuating daily needs.

  Alternatively, the change in work patterns might throw into sharp focus the deep and terrifying question of your intrinsic worth as a person. Just think of the bleak prospect of an insecure, anxious society in which most people feel inadequate, unable to keep up with the pace of change or to cope with the uncertain nature of their employment. In times to come, far from a simplistic split according to colour of collar, there might be the more invidious distinction of the technological master class versus the – in employment terms – truly useless.

  Which is it to be? If the IT revolution is changing the way we view ourselves in relation to the outside world, then its influence on what we actually do there will be immediate and far reaching; within the workplace this cyber-upheaval will determine how we interact with other people and things, and hence how we see ourselves. The computer, more than any other single object, will drive the change in work patterns, and even redefine the concept of work itself.

  Apparently, it takes some fifty years to optimize a technology – two generations or so for it to be assimilated fully through all the institutions and functions that make up an economy. The computer, icon as it is of a genuine revolution, analogous to the transition from steam to electric power, fits well into this timescale. For the first twenty-five years, say from 1945 to 1970, IT followed electricity in its initial unreliability and inefficiency; it had no measurable impact. Then for the next twenty-five years, into the 1990s, it was still expensive, non-standardized and unreliable, although purchased in large numbers. The workforce did not really know how to use it well, nor management how to apply it.

  Only at fifty years old did the ‘new’ computer technology become hyperproductive, delivering all its promises simultaneously. At last, at the turn of the century, IT has finally earned adjectives such as ‘cheap’ and ‘easy to use’, with the tsunami of applications and knock-on implications it has for our lives. But just as IT has come of age, so it might be simultaneously doomed – at least in its familiar silicon guise, powered by fossil fuels. As we saw in the last chapter, Moore's Law, which predicts that computer power will double every eighteen months, can't hold true for much longer. The big problem is that the workhorse components of the computer, its transistors and wiring, cannot shrink any smaller than the width of an atom. So, if computers are to be powerful enough to support and sustain the dramatic reality-changing devices that are otherwise technically feasible, then an alternative, fundamentally different type of computational system will soon be needed. The future of work is therefore tightly intertwined with that of the computer, or rather with the issue of what its successors might be, and what they will be able to do.
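  To give a feel for the numbers behind that doubling rhythm, here is a minimal back-of-envelope sketch in Python (an editorial illustration, not from the text; the time spans and starting point are arbitrary).

```python
# Rough arithmetic behind Moore's Law as stated above: capability doubles
# roughly every eighteen months (figures purely illustrative).

DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years):
    """Multiplicative growth after a given number of years of doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 25, 50):
    print(f"After {years:2d} years: about {growth_factor(years):,.0f} times the starting capability")

# Over fifty years the factor is around ten billion, which is why the same
# exponential cannot continue once components approach atomic dimensions.
```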

  Up until now there have been four ways of conveying information: by means of numbers, words, sounds and images. But new bio-based technologies – the digitizing of smell, taste and touch, along with such slippery phenomena as intuition and imagination – might soon add to the richness of our cyber-environment. Yet biology could make a more fundamental contribution still. Just as silicon implants might aid the flagging carbon-based brain, so the unique properties of living systems are providing inspiration for utterly different forms of IT: there might even be a case for a novel term, BioIT (biological information technology). Incredible and downright unlikely as such carbon-silicon hybrids might seem, the nascent technology already exists. It might be hard to believe that neurons could communicate readily with artificial systems, but they appear to do so with ease. For example, neurons are now being cultivated ‘in vitro’, literally in glass, along with growth-promoting material, to form narrow tracks in whatever geometry might be desirable for a particular circuit; such bio-wiring could then be hooked up to its silicon counterparts. The neuron would be no more and no less than a new type of electronic component, a ‘neurochip’.

  Neurochips could indeed become the basic component of the future semiconductor computer. In a normal transistor, current flow is modulated by controlling a voltage. Meanwhile, a cornerstone of physiology is that neurons generate a steady-state voltage (the resting potential) that can switch sharply when one cell signals to the next with a brief electrical blip (an action potential). Neurons change their voltage in this way by opening and closing minuscule tunnels in their membrane walls, through which ions (charged atoms) can flow into or out of the cell. This trafficking of ions amounts to momentary changes in the net charge between the inside and outside of the neuron, namely changes in the voltage across the membrane wall (the potential difference).
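  To make the electrical analogy concrete, here is a minimal ‘leaky integrate-and-fire’ toy model in Python (an editorial sketch, not Greenfield's or Fromherz's; all figures are illustrative). It shows how a steady inflow of ions nudges the membrane voltage up from its resting potential until a threshold is crossed and a brief, all-or-none blip is emitted.

```python
# Minimal 'leaky integrate-and-fire' toy model (illustrative figures only):
# ion flow nudges the membrane voltage up from rest; crossing a threshold
# produces a brief, all-or-none blip (the action potential), then a reset.

RESTING_MV = -70.0    # resting potential (millivolts)
THRESHOLD_MV = -55.0  # firing threshold
SPIKE_MV = 30.0       # peak of the action potential
TAU_MS = 10.0         # membrane time constant
DT_MS = 1.0           # simulation time step

def simulate(drive, steps=100):
    """Return the voltage trace for a constant depolarising drive (mV per ms)."""
    v, trace = RESTING_MV, []
    for _ in range(steps):
        # Leak back towards rest, plus the depolarising effect of ion influx.
        v += DT_MS * (-(v - RESTING_MV) / TAU_MS + drive)
        if v >= THRESHOLD_MV:      # threshold crossed: an all-or-none spike...
            trace.append(SPIKE_MV)
            v = RESTING_MV         # ...followed by a reset to rest
        else:
            trace.append(v)
    return trace

spikes = sum(1 for v in simulate(drive=2.0) if v == SPIKE_MV)
print(f"Toy neuron fired {spikes} times in 100 ms")
```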

  In a recent ingenious development at the Max Planck Institute in Martinsried, Germany, Peter Fromherz has been exploiting this phenomenon – and the fact that neurons are ultra-efficient at voltage control – by using the neuronal membrane wall as a gate contact for a transistor; so far a single neuron can cover sixteen closely packed transistors. To make prototype neurochips, snail neurons are puffed onto silicon chips layered with a kind of glue, placed over the transistors and held in check with tiny polymer picket fences. As the neurons grow to make connections (synapses), the transistors amplify their tiny voltages.

  These snail cells are ideal for the prototype neurochips because they are extra large and therefore easy to manipulate, and they can send electrical signals to each other in the usual way of living systems, as well as communicating with the non-biological, electrical components of the chip. Each cell sits over a field-effect transistor capable of amplifying tiny voltages. Although it seems hard to credit that as fragile a biological entity as a neuron could survive in such an artificial and isolated environment, away from the comfort and protection of the cerebral mother-ship, the displaced cells positively flourish. In fact, the scientists developing the project actually have to place physical barriers on the chip to stop unrestricted cell growth, which might otherwise eventually throttle the delicate circuitry that they are trying to establish. So far the team have been working with an array of some twenty cells. They have now devised a chip in which the electrical blip, the action potential, travels to a transistor, from there to another transistor, and finally on to a second neuron. But Peter Fromherz is currently planning an awesome 15,000 such neuron-transistor sites!

  Soon, therefore, we might see silicon-carbon hybrid neurocomputers with the potential to be far more compact and efficient than the standard machines of today. But some might feel queasy at this blurring of the distinction between living and non-living matter. Could such a system ever be conscious? Would it therefore feel pain, or, even more problematic, develop its own agenda – perhaps that most sensationalist and feared one of world domination? To all these questions the answer would be ‘no’. Remember that the biological component, the isolated neuron, is being used solely for its electrochemical efficiency, completely outside of the context of the three-dimensional brain, and indeed away from the brain-in-a-body. Since the system is deprived in this way of all other, as yet unidentified, essentials for consciousness, the risk of its developing any subjective inner state would surely be effectively zero – as it always has been for the vast range of isolated bits of brain found in labs, grown routinely as cells in a dish or as slices kept alive for hour after hour in routine experiments. Instead, consciousness is an emergent property of complex chemical systems within bodies, of which the enormous number of neuronal circuits compressed into brain tissue is still only a part. Here the situation is starkly different: the neuron is merely an electronic bit player, removed from the rich chemical and anatomical landscape of the brain; it is therefore not that much different from the all-silicon systems that featured earlier, just better at the job. Once the silicon-carbon hybrids are in use, it is unlikely that such qualms would even occur to our successors, who will rapidly come to accept the blurring of the current categorical distinction between carbon- and silicon-based systems – just as a century ago humanity accepted horseless carriages.

  Meanwhile, at the Technion-Israel Institute of Technology, scientists are using an even more fundamental biological building block, DNA, to tackle the problem of how to make computers smaller: let the circuitry ‘grow’. Uri Sivan, the head of the team, sums up the situation:

  Conventional microelectronics is pretty much approaching its miniaturization limit… If you are really able to fabricate devices with [much smaller] dimensions, you could squeeze roughly 100,000 times more electronics, or even a million times more electronics, into the same volume. That would mean much bigger memories and much faster electronics. In order to do that we need materials with self-assembling properties. We need molecules into which we can encode information which later will make them build themselves into very complex structures. Information is stored in the DNA in this way, information which is used by biological systems to build very complex molecules. We are really trying to copy that idea.

  So, Sivan is programming DNA to grow a strand between two electrodes, which is then turned into a wire by depositing atoms of silver along it. An even more radical option still, based once again on DNA, is to dispense with conventional electronics altogether. In 1994 Leonard Adleman, a computer scientist at the University of Southern California, pioneered the concept of a DNA computer. Each molecule of DNA stores information, like a computer chip. And the shift in scale is dramatic: 10 trillion molecules of DNA can fit into a space the size of a child's marble. A few kilos of DNA molecules in about 1,000 litres of liquid would take up just one cubic metre – and store more memory than all the computers ever made.
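  The figures quoted above can be sanity-checked with a rough back-of-envelope calculation. The sketch below is an editorial illustration; the molecular mass and the ‘two bits per base’ assumption are approximations, not figures from the book.

```python
# Back-of-envelope check of DNA's information density (editorial assumptions:
# ~330 g/mol per nucleotide, 2 bits per base; not figures from the book).

AVOGADRO = 6.022e23                 # molecules per mole
NUCLEOTIDE_MASS_G_PER_MOL = 330.0   # approximate mass of a single nucleotide
BITS_PER_NUCLEOTIDE = 2             # one of four bases: A, C, G or T

def dna_capacity_bytes(grams):
    """Rough upper bound on bytes encodable in a given mass of DNA."""
    nucleotides = grams / NUCLEOTIDE_MASS_G_PER_MOL * AVOGADRO
    return nucleotides * BITS_PER_NUCLEOTIDE / 8

print(f"~{dna_capacity_bytes(1.0):.1e} bytes per gram")       # hundreds of exabytes
print(f"~{dna_capacity_bytes(3000.0):.1e} bytes in a few kilograms")
# A few kilograms reach the 1e24-byte range, comfortably beyond the total
# storage of conventional computers at the time the book was written.
```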

  In a DNA computer the input and output are both strands of DNA, with DNA logic gates (for example, the ‘and’ gate chemically binds two DNA inputs to form a single output). Although the system resembles a conventional computer, in that both are digital and can therefore be manipulated as such, the silicon computer has only two positions, 0 and 1, whereas its DNA counterpart has four (the bases that make up the genetic code: adenine, A; cytosine, C; guanine, G; thymine, T). This doubling of the number of positions means that a fledgling DNA computer has already solved, in one week, a problem that would take a standard computer several years, storing over 100 trillion times more information. This staggering feat is possible because one step in computation will affect a trillion strands of DNA simultaneously, so the system can consider many solutions to a problem in one go. Now add to this appealing arrangement another colossal advantage over silicon computers: since the DNA computer is a biological system, excessive heat generation will not be a problem, so it will be a billion times more energy efficient.
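  As a loose illustration of the ‘and’ gate idea and of the massive parallelism, here is a toy Python sketch (editorial and purely symbolic: real DNA logic gates work through chemical binding, not string handling). Each ‘tube’ holds many strands, and a single gate operation acts on all of them at once.

```python
# Toy, purely symbolic sketch of DNA-style logic (illustrative only).
# Real DNA gates bind strands chemically; here strands are just strings.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Watson-Crick complement of a strand (A<->T, C<->G)."""
    return "".join(COMPLEMENT[base] for base in strand)

def and_gate(input_a, input_b):
    """Emit an output strand only when both inputs are present,
    crudely modelled as joining the two input strands."""
    if input_a and input_b:
        return input_a + input_b
    return None

# One 'step' of chemistry acts on every strand in the tube simultaneously.
tube_a = ["ACGT", "GGCC", "TTAA"]
tube_b = ["CCCC", "AATT", ""]        # the last pair lacks a second input
print([and_gate(a, b) for a, b in zip(tube_a, tube_b)])
# -> ['ACGTCCCC', 'GGCCAATT', None]
print(complement("ACGT"))            # -> 'TGCA'
```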

  The great weakness of DNA computers, however, is that their molecules decay, and they would therefore be no good for the long-term storage of information. Another question, raised by the physicist Michio Kaku, is whether they could ever be very versatile, as the solution of each potential problem would require a unique set of chemical reactions. These reservations notwithstanding, DNA computers could eventually be useful for number crunching, within broad classes of problems.

  So far, all the biological encroachments that we have been looking at in current computer technology, be they neurochips, neurocomputers, DNA wires or DNA computers, still have a very basic feature in common with the most modest laptop: they are digital – they process information in an all-or-none way. Not so an even more innovative type of computer, first theorized some twenty years ago by the physicist Paul Benioff: the quantum computer. As you might guess from the name, this is the concept of a machine that works on the principles of quantum physics. Unlike digital machines, a quantum computer could work on millions of computations at once.

  If quantum computers really could be developed for practical applications, they would represent an advance comparable to the advent of the transistor in replacing the vacuum tube. In fact, the system could be used initially as a transistor – a means of regulating the current flow, on or off, of a single electron occupying some 20 nm of space (a quantum dot) that thereby generates a ‘bit’, a 1 or 0. But this most immediate use of quantum transistors is trivial in comparison with the long-term application of quantum theory to computing. The novelty, and power, of this approach lies in the fact that the bit will not be just 1 or 0 but could also represent somewhere in between. So, the quantum bit (‘qubit’) is very different from the bits in conventional computing. Since qubits can be between 1 and 0 (superposition), the potential number of states for the computer is astronomical. A quantum computer with only a paltry 100 qubits, for example, will nevertheless have a mind-boggling 10²⁹ (a hundred thousand, trillion, trillion) simultaneous states.
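  The astronomical figure follows directly from the arithmetic of superposition: n qubits span 2ⁿ basis states. A minimal check in Python (an editorial sketch, not from the text):

```python
# How the state count scales with the number of qubits: n qubits can be in a
# superposition over 2**n basis states (an illustration of the arithmetic only).

def qubit_states(n):
    return 2 ** n

for n in (10, 50, 100):
    print(f"{n:3d} qubits -> about {qubit_states(n):.2e} simultaneous basis states")

# 100 qubits give 2**100, roughly 1.3e30: the same astronomical scale the
# text describes, far beyond anything a conventional register can enumerate.
```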

 
