Tomorrow's People

by Susan Greenfield


  However, in the early days some were sloppy in how they used this new term. There were those who used the word as a label for any device that was unusually small and thus might have an unusual job, such as acting as a minuscule submarine to monitor, and even scour, the furring up of arteries. But often these developments were really operating on the microscale, at least 1,000 times larger than a nanometre. It was important to distinguish ‘true’ nanoscience from the development of merely very small machines. Such devices, micro-electromechanical systems (MEMS), were nonetheless awesome – miniature sensors and motors about the size of a dust particle. These sensors and motors were etched onto silicon wafers using the same technique as in the microchip industry. Applications in those early days included an air-bag motion detector the size of a whisker, which allowed a reduction in the cost of certain laboratory equipment from $20,000 right down to $10, microdevices that could thread blood vessels, and cheap, pressure-sensitive devices embedded in steel and other building materials to detect the kinds of stress occurring in an earthquake, or on the surface of aircraft wings to detect stress during flight. However, after 2020 MEMS were ousted by machines on the truly nanoscale.

  To get an idea of this scale, of just how small the nano-range actually is, consider that a human hair is an awesome 10,000 nanometres thick. The nanoworld, as the science writer Gary Stix pointed out, is the ‘weird borderland between the realm of individual atoms and molecules (where quantum mechanics rules) and the macroworld (where the bulk properties of materials emerge from the collective behavior of trillions of atoms, whether that material is a steel beam or the cream filling in an Oreo [an American biscuit])’. As such, nanotechnology defines the smallest natural structures; put succinctly, it is impossible to build anything smaller. The burgeoning interest in the nanoworld has stemmed from the idea that structures on this small scale may have superior electrical, chemical, mechanical or optical properties. Once conventional silicon electronics ceased to work around 2020, then the new nanotechnology clearly offered the most realistic and attractive alternative.

  It was in the last decade of the last century that scientists had started to take nanoscience seriously, following a breakthrough when scientists at IBM arranged 35 xenon atoms on a nickel surface to spell out their logo. Funding started to soar, even outside of the USA, rising from $316 million in 1997 to $835 million only four years later. Advocates of nanotechnology waxed lyrical about its potential, and indeed its impact on everyday life, university research and commerce in the 21st century. In the mid 1990s it cost the Nanophase Technologies Corporation $1,000 to produce a single gram of nanoparticles; within a decade a gram cost only a few cents and could be used in products as disparate as odour-eating foot powders and ships. Within a few years nanotechnologists went on to offer affordable solutions to complex and novel problems, such as the replacement of palladium, a costly component used in many cars for catalytic conversion.

  All this direct application of basic science to industry fuelled a push for public money. In 1999 Bill Clinton announced $422 million for a National Nanotechnology Initiative; his successor George W. Bush announced a further $487 million in 2001. Governments around the world were soon estimated to be investing billions in basic nanotechnology research and development. This kind of backing was hardly surprising since basic research into nanoscience was tool-driven, leading immediately to practical applications in virtually all aspects of life. Soon drugs were developed to detect and kill cancer cells before they could do any extensive harm, whilst Boeing 747 jumbo jets were quickly being built at one-fiftieth of their previous weight as new types of materials became available. New nano-agents rapidly became routine additions to vents of hospitals for detecting disease, whilst others attached only to cholesterol, so that they could be selectively targeted in hardened arteries. The domestic scene was also undergoing a revolution as new fabrics became available that were cool in summer but warm when the weather turned colder, and a new generation of household paints was developed to repel dirt. But the real excitement was not that previously everyday objects and products were suddenly ‘smart’ but that they had acquired astonishing, utterly novel properties. Now that surfaces of materials were layered with atomic precision, everyone soon became used to phenomena that previously would have seemed utterly impossible, such as frictionless bearings, scratch-proof spectacles and more powerful fibre-optic cables.

  Admittedly it had not all been plain sailing. One of the original worries was that, at the time, there was no appropriately scaled-down wire: after all, any potential molecular machine was useless as long as the different components within it were unable to connect with each other. This difficulty was overcome with ‘nanotubes’. Nanotubes, still very like their turn-of-the-century prototype, are so thin that you would need 50,000 side by side to cover the width of a human hair; they are made by heating carbon to a vapour, then condensing it in a vacuum or inert gas. Amazingly, the carbon atoms then arrange themselves into the classic hexagonal pattern familiar from footballs and from the spherical ‘buckyballs’, here rolled up into a long cylinder that not only conducts electricity but is some 100 times stronger than steel and weighs one-sixth as much.

  Already, within half a century of Feynman's original vision, scientists had the eyes and fingers to manipulate nature's building blocks. The ‘eyes’ were microscopes a million times more powerful than the human eye. The prototype Scanning Tunnelling Microscope (STM) dragged a tiny needle across the surface of an object. Electrons could then ‘tunnel’ across the electrical barrier between the needle and atoms, creating a current that measured the position of atoms. Going one further, to act as ‘fingers’, the needle could move individual molecules to create stable hexagonal rings at room temperature by applying a voltage through its super-sharp tip. An additional refinement soon followed in the Atomic Force Microscope (AFM), which also probes surfaces and produces topographical images of individual atoms. These nanoscale fingers and eyes soon started to provide new knowledge about how matter operates at the atomic and molecular level. As a result it became possible to put together molecules that have never been connected before, not just nanotubes but nanotweezers for grabbing and pulling molecules; for the first time, scientists started to manipulate the different components within a single human cell. Now artificial chromosomes could be introduced with ease. Indeed, by 2030 sophisticated machines for even more precise manipulation of individual atoms were commonplace. In the first few decades of the 21st century there was no limit to the nanotechnologist's dreams: ‘If nature has figured out how to arrange the atoms in coal to make a diamond, then we should be able to do the same.’ So one expert at the time predicted, envisaging that within a mere twenty-five years there would be a new passion for alchemy, but with a scientific basis, whereby the wonders of nanoscience would transmute the banal into the special, as well as generating life-saving materials such as bones, spinal cords and human hearts.

  But then the real problems started. Although the STM and AFM could move individual particles they were really too slow for mass production. Moreover, whilst careful control of chemical reactions could assemble atoms and molecules into small nanostructures of 2–10 nanometres, the process failed to produce designed, interconnected patterns of the type needed for electronic devices such as microchips. And despite Feynman having pointed out that there is nothing in the laws of physics that precludes the construction of nanodevices, not one molecular machine on the true nanoscale has ever, even now, actually been made in any lab. The problem has been that as micromachines gave way to their nanoscale counterparts the goalposts moved: the compliant and permissive laws of physics concede to the more capricious and non-negotiable ones of chemistry.

  For example, the formation of bonds between atoms may also affect those atoms in other ways, which we cannot necessarily predict or understand. Quantum mechanics will probably prevail over events in small electrical devices where a single electron can dominate current flow, but the smaller a device the more susceptible its physical properties are to alteration. No longer does the physics of bulk dominate but rather the chemical properties of surfaces become all-important. In a silicon beam 10 nm wide and 100 nm long, as many as 10 per cent of the atoms are on or near the surface. Even nanotubes in the real world have electrical properties that are very different once away from their original and rarefied ultra-high-vacuum habitats. Clearly such a high proportion of surface molecules will play a central role, but how?
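
  As a rough illustration of that 10 per cent figure, here is a back-of-envelope sketch only, assuming the beam is also about 10 nm thick and counting as ‘near the surface’ anything within roughly one atomic layer (about 0.27 nm) of an outer face:

```python
# Back-of-envelope estimate: what fraction of a nanoscale silicon beam's atoms
# lie on or near its surface? Illustrative assumptions only: the beam is 10 nm
# wide, 10 nm thick and 100 nm long, and "near the surface" means within one
# atomic layer (~0.27 nm, about half the 0.543 nm silicon lattice constant) of
# an outer face. The volume fraction stands in for the atom fraction.

def surface_atom_fraction(width_nm: float, height_nm: float, length_nm: float,
                          shell_nm: float = 0.27) -> float:
    """Fraction of the beam's volume lying within shell_nm of any face."""
    total = width_nm * height_nm * length_nm
    inner = ((width_nm - 2 * shell_nm)
             * (height_nm - 2 * shell_nm)
             * (length_nm - 2 * shell_nm))
    return (total - inner) / total

print(f"{surface_atom_fraction(10, 10, 100):.1%}")  # -> 11.0%, close to the quoted 10 per cent
```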

  In general, we still cannot determine how arbitrarily assembled neutrons, protons and electrons will finally behave en masse. The really great challenge for nanoscience is therefore to find a way of assembling such small components together in a purposeful, controlled fashion. Unlike conventional circuit design, you find yourself beginning with a haphazard jumble of as many as 10²⁴ nanocomponents and wires, not all of which will work and from which you must somehow gradually shape a useful device. Or to rephrase the problem: you need to think of a means of linking the nano- and microworlds. For quite a while now these deep conceptual difficulties have tempered the initial, unconditional enthusiasm for nanotechnology. The chemist David Jones, way back at the dawn of the nanotech era, captured the mood of this reality check: ‘How do these machines know where every atom is located? How do you program such machines to perform their miraculous feats? How do they navigate? Where does their power come from?’

  Although these questions are still unanswered, a core dream of some nanoscientists is nonetheless that nanomachines will be able to control themselves and be independent of interference from on high, from the human macroworld. The true convert envisages self-constructing machines, built ‘bottom up’ by nanoassemblers and nanorobots. Such self-replication would rely on gears and wheels no larger than several atoms in diameter, which would give rise to atomic-scale machines able to scavenge molecules from their environment in order to reproduce themselves. These molecular robots, about a tenth of a micrometre in size, could then manipulate still more individual atoms. Such nanomachines could operate like bacteria, or viruses, for good or ill: they could destroy infectious microbes, kill tumour cells, patrol the bloodstream, and devour hazardous wastes in the environment, as well as enhancing the production of cheap and plentiful food. They could go on to build other machines, from booster rockets to microchips and supercomputers the size of atoms…

  But perhaps most fantastical of all is the idea that nanotechnology could reinstate the original patterns of atoms, and thus ultimately of cells, that have been damaged, and so, in theory at least, reverse their ageing. You are highly sceptical, as are many of your colleagues, but the concept is straightforward enough. Ultimately, the frozen brains of the newly dead could be restored to their original atomic configuration by undoing the damage usually caused as ice crystals form and rupture cell membranes during freezing and thawing. From this concept a whole industry, cryonics, has been flourishing in the USA since as far back as the previous century. The plan is that you sign up to be frozen once you have died, in the hope that once this technology of precision atomic rearrangement has been perfected, plus that of identifying the ‘genes to survive’, you might end up immortal.

  Already there are many customers; however, the concept of a gene ‘for’ survival, as ‘for’ any complex process or trait, might be more than a little naive, as we saw previously. And even if we could discover Methuselah genes that we then used to enhance the genomes of our progeny, you, the cryonics ‘patient’ from a bygone era, would have missed out; in any event, the genes would not, even at best, guarantee living for ever, merely a prolongation of life. A further, and more immediate, dependence on gene technology arises because in many cryonics schemes the patient is frozen as ‘head only’, in the confident assumption that the technology to grow a new body will also exist in the future. Let us hope that the nanotechnological reversal of cell damage and revival from freezing do not precede that science of growing a new body. The prospect of disembodied but conscious heads suddenly joining the world population is surely one of the least palatable our imaginations could conjure up.

  Another plot ripe for Hollywood would, in any case, be the revival of cryonics patients en masse. Even making the big assumption that one day such a thing would be possible, would it actually happen? In centuries to come, although a few individuals might be brought back as medical freaks, psychological specimens or first-hand historical sources, it is hard to see any point in such a wholesale exercise. The world is already overburdened and its resources stretched; the population is teeming – why wake up a load of once sick, mainly old people from long ago?

  Nonetheless, if we suspend any logical reasoning for a moment, it is intriguing to contemplate what it might be like if such individuals from different times in the past were, after all, brought back as a matter of routine. Each would bring the culture and ideas of different times and places, and as they established a significant presence within the population no one could assume a common knowledge-base any longer. On the other hand, we have seen that no one will need to know anything anyway, but could access databases on a just-in-time basis; the problem would therefore be essentially one of cultural adjustment for the newcomers. A more pressing issue, however, would be whether, when the person died once more, they could have another spell of freezing, or whether they would live for ever. If they are to be immortal, the planet will soon be overpopulated. However you look at it, from scientific viability through to political, sociological and economic implications, effective time travel with cryonics is about as likely as, and less attractive than, travelling faster than the speed of light.

  There is another far-fetched consequence of nanotechnology, though arguably slightly more realistic. Because machines could be self-replicating, they would cost very little. The downside, recognized by even the most ardent nanotechnologist, is that if the whole process got out of hand, the world might end up covered in a ‘grey goo’ of self-replicating material that choked everything else. It was just this type of scenario that featured in Michael Crichton's 2002 novel Prey. In reality, although a nanorobot working on its own would take some 19 million years to make one ounce of matter, self-replication could speed things up enormously. For example, if a billion billion such robots were doing the job, then one second would see the production of some fifty kilograms of matter!
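
  As a rough check of those figures, a sketch only, using round numbers (a year of about 31.5 million seconds and an ounce of about 28 grams):

```python
# Rough check of the grey-goo arithmetic quoted above: one assembler needing
# 19 million years per ounce, versus a billion billion (10**18) assemblers
# working in parallel. Round numbers only.

SECONDS_PER_YEAR = 3.15e7
OUNCE_IN_GRAMS = 28.35

seconds_per_ounce_single = 19e6 * SECONDS_PER_YEAR   # ~6e14 s for one assembler
assemblers = 1e18                                    # "a billion billion"
ounces_per_second = assemblers / seconds_per_ounce_single

print(f"{ounces_per_second * OUNCE_IN_GRAMS / 1000:.0f} kg per second")  # -> 47 kg, roughly fifty
```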

  There were those at the beginning of the 21st century, such as the chemist Richard Smalley, who never needed to resort to science-fiction paranoia in order to point out the disadvantages and problems associated with the notion of self-assembling nanodevices. In an ordinary chemical reaction some five to fifteen atoms near the reaction site work together in a three-dimensional region no more than a nanometre or so on each side. The big problem is that any nanorobot would have to control not just one atom but all the atoms in the region of the reaction. But since the robot is of an irreducible size, being itself made of atoms, it has ‘fat fingers’ that hamper appropriate manipulation. There is simply not enough room in the nanospace to accommodate all the fingers needed for complete control of the chemistry.

  And not only are these fingers fat, they are also sticky: because of the nature of the bonds that hold a molecule together, it will be hard to distinguish an atom destined for construction from an atom of the finger itself, and then to release it once the appropriate manoeuvre is completed. Add to this insuperable difficulty the ever-present problems of miniaturization – we have already seen that surfaces become all-important – along with many other inherent problems, including those posed by friction and sticking, and it is not surprising that as yet a self-replicating machine has never been built on any scale.

  Advocates of nanotechnology argue that it is feasible by drawing analogies with nanomachines that already exist in nature; they point to the process whereby a genetic instruction is translated into the manufacture of a protein via RNA, or the conversion of light into energy in plants via the chloroplast. Another example is the baton-passing of electrons within the erstwhile bugs, mitochondria, which in more recent evolutionary times have moved into cells, where they use oxygen to release energy. Now compare the artificial nanocounterparts: even though their small size would require only a small amount of power, it would have to come from somewhere. A more serious problem still is the amount of information a self-replicating machine would need to make itself, to collect from the environment all materials necessary for energy and fabrication, and to assemble unaided all the pieces necessary to make a copy of itself. In biological systems DNA and mitochondria come to the rescue; but it is less obvious where to start without these mainstays of living cells. As a chemist, you ponder whether any nanotechnological device independent of DNA could ever really be more efficient. And in any case, what would be the point of trying to improve on evolution?

  In your view, a more valuable use of nanotechnology is for it to assist biological, DNA-based systems in new medical applications. For example, nanoparticles can now deliver drugs to specifically targeted sites, including places that standard drugs do not reach easily. Another possibility could be to reveal which sets of genes are active under certain conditions, for example with gold nanoparticles bound to a DNA probe: if the DNA in question is present, the probe binds to it and a colour change occurs. Another device might enable quick diagnostic screens: antibodies labelled with magnetic nanoparticles, briefly exposed to a magnetic field, respond with a strong magnetic signal if they are binding to certain substances. And gold nanoshells linked to specific antibodies that target tumours could, when hit by infrared light, heat up enough to destroy growths selectively.

  Additional biomedical applications of nanoscience are starting to mushroom, for instance, substances that massively enhance the contrast of key tissues for non-invasive imaging of the brain or body. Another advance has been nanoscale modification of implant surfaces to improve durability within the body as well as biocompatibility, so now you could be fitted with an artificial hip coated with nanoparticles that bond to the surrounding bone more tightly, and so avoid loosening. Another idea is the dendrimer, a kind of molecular coat-hanger about the size of a protein, but one that does not come apart or unfold as proteins do and therefore allows for stronger chemical bonds. These nanoscale coat-hangers could act as a means of delivering new types of drugs.

 
