Borderlands of Science

by Charles Sheffield


  9) We introduce into our beaker of acceptable-length strands the complementary DNA string for each town. We do this for one town at a time. Thus, the complementary string for Hull is AGCGCCCTAA TCTGACATTC.

  Anything with the single strand for Hull will attach to this complementary Hull strand. We separate out only strands with such complete Hull double strands. Into this beaker we introduce the complementary DNA string for Hornsea, CAAGCTTCAG TCAGCATGGA. Only DNA strands with the Hornsea DNA single strand will attach, so we can now separate only those strands containing both Hull and Hornsea complete double strands.

  In the same way we use the complementary single strands for Beverley, Weighton, and Driffield, to generate strands that must contain complete double DNA strands representing all five towns.

  10) Finally, we select out the shortest strands from all those that pass our tests. Such strands visit each town, and they do so with the shortest possible distance. Analysis of those DNA strands will tell us both the town-to-town route and the distance.

  This may seem like an awful lot of work to solve a very simple problem, and of course for a small case like this the DNA computer is certainly overkill. With realistic quantities of DNA, we would find not one strand giving the solution, but billions or trillions of them. Also, no one in their right mind would try a simple exhaustive search method on the Traveling Salesman Problem. It is totally impractical for large-scale networks of cities, and different methods are applied. We give it merely to show an example of the technique.
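
  None of this needs a beaker, of course. A minimal sketch in Python of the same exhaustive search makes the point; the five towns are those named above, the pairwise distances are invented here purely for illustration (they are not taken from the book), and the complement() helper mirrors the strand-fishing of step 9.

```python
from itertools import permutations

# The five towns from the text; these pairwise distances are invented
# placeholders, used only to make the example runnable.
TOWNS = ["Hull", "Hornsea", "Beverley", "Weighton", "Driffield"]
DIST = {
    ("Hull", "Hornsea"): 17, ("Hull", "Beverley"): 9, ("Hull", "Weighton"): 20,
    ("Hull", "Driffield"): 21, ("Hornsea", "Beverley"): 13,
    ("Hornsea", "Weighton"): 25, ("Hornsea", "Driffield"): 14,
    ("Beverley", "Weighton"): 10, ("Beverley", "Driffield"): 13,
    ("Weighton", "Driffield"): 12,
}

def distance(a, b):
    """Look up a symmetric town-to-town distance."""
    return DIST.get((a, b)) or DIST[(b, a)]

def complement(strand):
    """Watson-Crick complement, the pairing used in step 9 to fish out strands."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in strand)

# Exhaustive search: every ordering of the five towns is a candidate "strand";
# keeping only complete tours and taking the shortest one is the software
# analogue of the filtering in steps 9 and 10.
best_route, best_length = None, float("inf")
for route in permutations(TOWNS):
    length = sum(distance(route[i], route[i + 1]) for i in range(len(route) - 1))
    if length < best_length:
        best_route, best_length = route, length

print(" -> ".join(best_route), best_length)
print(complement("AGCGCCCTAATCTGACATTC"))  # recovers the Hull single strand
```

  Even in this form the program examines every possible ordering of the towns, which is exactly why exhaustive search becomes hopeless for large networks of cities.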

  When DNA computing becomes more sophisticated, we should be able to tackle much harder and bigger problems. Soon after Adleman's original paper, Richard Lipton pointed out how a DNA-based computer could address a difficult class of searches known as "satisfiability" problems (Lipton, 1995). Since then, biological computers have been taken seriously as a computational tool with great although unmeasured potential.

  * * *

  10.3 Quantum computers: making a virtue of necessity. Computers, as they were envisaged originally by Charles Babbage in the first half of the nineteenth century and implemented in the second half of the twentieth, are deterministic machines. This happens to be one of their principal virtues. A calculation, repeated once or a thousand times, will always yield precisely the same answer.

  However, as the size of components shrinks toward the molecular and atomic level, indeterminacy inevitably creeps in. The "classical" computer becomes the "quantum" computer, in which quantum effects and uncertainty appear. As we pointed out in Chapter 2, this is an absolutely essential and inescapable consequence of quantum theory, and if the components are small enough there is no way that quantum effects can be either ignored or avoided.

  Is there any way we might make a virtue out of necessity, and use quantum effects to improve the performance of a computer? That question has been asked in detail only in the last few years, though Richard Feynman wondered about the possibility in 1985. The answer is astonishing: a "quantum computer" seems to be theoretically possible (none has yet been built), and its performance may permit the solution of problems quite out of reach of a deterministic, classical computer.

  The classical computer is built from components that each have two possible states, which we might label as "on" and "off," or "up" and "down," or 1 and 0. Any number, as we remarked in discussing biological computers, can be written as a string of 1s and 0s; e.g., the decimal number 891,535 is 11011001101010001111 in binary notation. Binary to decimal and decimal to binary conversion is easy for any number whatsoever.
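
  A quick sketch of that conversion, using nothing but Python's built-in number handling:

```python
# Decimal <-> binary round trip for the example in the text.
n = 891_535
binary = format(n, "b")              # '11011001101010001111'
print(binary, int(binary, 2) == n)   # the round trip is exact
```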

  Our quantum computer will use as components individual electrons. Each has two possible spin states, which we label "u" and "d" for up and down. Twenty electrons would then represent the number 891,535 as ddudduuddududuuudddd.
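
  Reading the spin string back as a number is just as mechanical; a two-line sketch, using the convention u = 0, d = 1 implied here:

```python
# Translate the spin string from the text back into a decimal number.
spins = "ddudduuddududuuudddd"
print(int(spins.replace("d", "1").replace("u", "0"), 2))   # 891535
```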

  So far we seem to have accomplished nothing. However, recall that according to quantum theory an electron can be in a "mixed state," part u and part d. A mixed state with two components is termed a quantum bit, or qubit, to distinguish it from a classical binary digit, or bit. The classical binary digit is either 0 or 1 (u or d). The corresponding qubit is simultaneously 0 and 1 (u and d).

  If we know only that the state of each electron is a mixed state, 20 of them—20 qubits—might represent 2^20 different numbers. If we perform logical operations on the group of electrons, without ever determining their states, then we have performed computations on all possible numbers they might represent. Operations are being performed in parallel, and to do the same thing with a classical computer we would need 2^20 processing units—more than a million of them.
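
  The classical cost can be made concrete with a small simulation sketch (the function names here are our own, not any particular library's): to keep track of 20 qubits, a conventional machine must store and update 2^20 complex amplitudes at once.

```python
import numpy as np

n = 20
state = np.zeros(2 ** n, dtype=complex)   # one amplitude per possible 20-bit pattern
state[0] = 1.0                            # start with every qubit "u" (that is, 0)

def flip_qubit(state, target):
    """A single logical NOT on one qubit reshuffles all 2**n amplitudes."""
    indices = np.arange(len(state))
    return state[indices ^ (1 << target)]  # XOR swaps the 0/1 value of that qubit

state = flip_qubit(state, 0)
print(int(np.argmax(np.abs(state))))      # 1: the pattern with qubit 0 flipped to "d"
```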

  The choice of a string of 20 electrons was arbitrary. We could as easily have chosen 100 qubits, or 1,000. That is still a tiny set, compared with the number of electrons in any electric signal. However, 2^1000 is about 10^300. We have a possible parallel operation at a near-incomprehensible level.
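
  The arithmetic behind those figures is easy to confirm:

```python
print(len(str(2 ** 20)))      # 7 digits: a little over a million
print(len(str(2 ** 1000)))    # 302 digits: roughly 10^300
```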

  The principles described here are clear, and some possible practical applications are already known. For example, the parallel processing provided by qubits can be used to decompose numbers into their prime factors. This is a problem that classical computers cannot solve in practice for very large numbers; the computation time simply becomes too great.
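
  To see why, consider the most naive classical approach, trial division, sketched below: its running time grows roughly with the square root of the number, which is hopeless for the several-hundred-digit numbers used in cryptography. (Shor's 1994 quantum factoring algorithm is the famous result in this area; the sketch shows only the classical baseline.)

```python
def trial_division(n):
    """Classical factoring by trial division; the work grows roughly as sqrt(n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2 ** 20 - 1))   # [3, 5, 5, 11, 31, 41]
```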

  However, we have skated over some of the difficulties involved with quantum computers. The worst one is the extreme sensitivity of a quantum computer to its surroundings. No computer can be completely isolated from the rest of the universe, and tiny interactions will disturb the mixed states needed for the qubits. This is termed the "problem of decoherence," and its practical effect is that any problem solved on a quantum computer must be completed before interaction of the environment causes decoherence.

  Quantum computers are very much on the science frontier. Their development stage today may be like that of classical computers in the mid-1940s, when some of the world's smartest people doubted that a reliable electronic computer would ever be built—the failure rate of vacuum tubes was too high. Today, transistors and integrated circuits are so reliable that a hardware error is the last place to look when a program fails (the first place we look is in the computer program code, generated by that quirky and unreliable computer, the human brain).

  10.4 Where are the robots? Science fiction writers did a poor job predicting the arrival of near-ubiquitous general purpose computers. What science fiction did predict was robots, mechanical marvels capable of performing all manner of tasks normally associated with humans.

  Robots came into science fiction three-quarters of a century ago, in Karel Capek's play R.U.R. (Capek, 1920). They have been a staple element of science fiction ever since. In the real world, robots have fared less well. Either they have been confined to the role of robot arms at a fixed location, performing a few limited operations on an assembly line; or they have been slow-moving, clumsy morons, trundling their way with difficulty across a simplified room environment to pick up colored blocks with less skill and accuracy than the average two-year-old.

  What went wrong? And when, if ever, will it go right?

  The big problem seems to be human hubris. We, aware of the big and complex brains that set us apart from every other animal, overemphasize the importance of logical thought. At the same time, we tend to diminish the importance of the functions that we share with animals: easy recognition and understanding of environment, easy grasping of objects, effortless locomotion across difficult terrain.

  But seeing and walking have a billion years of development effort behind them. We do them well, not because they are intrinsically simple, but because evolution has weeded out anything that found them difficult. We don't even have to think about seeing. Logical thought, on the other hand, has been around for no more than a million years. No wonder we still have trouble doing it. We are proud of our ability, but a fully evolved creature would find thought as effortless, and be as unaware of the complexity of operations that lay behind it, as taking a drink of water.

  Recognizing the truth does not solve the problem, but it allows us to place emphasis in the appropriate area. For many years, the "difficult" part of making a robot was assumed to be the logical operations. This led to computer programs that play a near-perfect game of checkers and a powerful game of chess. The hardware/software combination known as Deep Blue beat world champion Kasparov in 1997, though human fatigue and stress were also factors. At the same time, the program was as helpless as a baby when it came to picking up a chess piece and executing a move. Those functions were performed by Deep Blue's human handlers.

  So when will we have a "real" robot, one able to perform useful tasks in the relatively complicated environment of the average home?

  The answer is: when developments from two directions meet.

  Those two directions are:

  1) "Top-down" activities, usually referred to as Artificial Intelligence, or just AI, that seek to copy human thought processes as they exist today. AI, after a promising start in the 1960s, stumbled and slowed. One problem is that we don't know exactly what human thought processes are. As Marvin Minsky has pointed out (Minsky, 1986), the easy part is modeling the activities of the conscious mind. The hard part is the unconscious mind, inaccessible to us and difficult to define and imitate.

  2) "Bottom-up" activities, that start with the basic problems of perception and mobility, without introducing the idea of thought at all. This is an "evolutionary" approach, building computers that incrementally model the behavior of animals with complex nervous systems. We know that this can be done, because it happened once already, in Nature. However, we hope to beat Nature's implementation schedule of a few billion years.

  When top-down and bottom-up meet, in what Hans Moravec refers to as the "metaphorical golden spike" of robotics (Moravec, 1988), we will have a reasoning computer program (or, more likely, a large interconnected set of programs) with a good associated "lower-level" perception and movement capability. In other words, robots as science fiction has known them for many years.

  When?

  Moravec says in fifty years or so. He is perhaps not entirely impartial, but if we do not accept the estimates of leaders in the robotics field, whom do we believe? If you introduce working household robots into a story set in 2050, at least some of today's robotics specialists will offer you moral support.

  In making his estimate, Moravec relies on two things. First, that the projections quoted at the beginning of this chapter on computer speed, size, and costs are correct. Advances in biological or quantum computers can only serve to bring the date of practical robots closer.

  Second, Moravec believes that when the necessary components come together, they will do so very quickly.

  We have to ask the question: What next? What will come after reasoning computer programs?

  The optimists see a wonderful new partnership, with humans and the machines that they have created moving together into a future where human manual labor is unknown, while mental activities become a splendid joint endeavor.

  The pessimists point out that computers are only half a century old. In another one or two hundred years they may be able to design improved versions of themselves. At that point humans will have served their evolutionary purpose, as a transition stage on the way to a higher life-form. We can bow out, while computers and their descendants inherit the universe. With luck, maybe a few of us will be kept around as historical curiosities.

  All of this presumes that the development we describe next does not achieve the potential that many people foresee for it.

  10.5 Nanotechnology: the anything machine. Richard Feynman, who is apt to pop up anywhere in the physics of the second half of this century, gave a speech in 1959 that many of his listeners regarded as some kind of joke. It has since come to seem highly prophetic. Feynman noted that whereas the machines we build are all different to a greater or lesser degree, every electron is identical, as is every proton and every neutron. He suggested that if we built machines one atom at a time, they could be absolutely identical. He also wondered just how small a machine might be made. Could there be electric motors, a millimeter across? If so, then how about a micrometer across? Bacteria are no bigger, and they seem much more complicated than the relatively simple machines that we use in our everyday world.

  Suppose that such minute machines can be built, hardly bigger than a large molecule; further, suppose that they can be made self-replicating, able, like bacteria, to make endless copies of themselves from raw materials in their environment. Finally, suppose that the machines can be programmed, to perform cooperatively any task we choose for them. At a larger scale, Nature has again beaten us to it. The social insects (ants, bees, termites) form a highly cooperative group of individually simple entities, able in combination to accomplish the complex tasks of colony maintenance and reproduction.

  These ideas of tiny, self-replicating, programmable machines were all put together by Eric Drexler, who gave the whole concept a name: nanotechnology. In a book, Engines of Creation (Drexler, 1986), and in subsequent works, he outlined what myriads of these programmable self-replicating machines might accomplish.

  The list includes flawless production with built-in quality control (misplaced atoms can be detected and replaced or moved); materials with a strength-to-density ratio an order of magnitude better than anything we have today (useful for the exotic space applications of Chapter 8); molecular-level disease diagnosis and tissue repair, of a non-intrusive nature—in other words, we would be unaware of the presence of the machines within our own bodies; and "smart" home service and transportation systems, capable of automatic self-diagnosis and component replacement. When suitable programs have been developed—once only, for each application—all of these things will be available for the price of the raw materials.

  Putting the potential applications together, we seem to have an anything machine. Any item that we can define completely can be built, inexpensively, in large quantities, provided only that the basic materials are inexpensive. Spaceships and aircraft will grow themselves. Household chef units will develop the meals of our choice from basic non-biotic components. Our bodies will have their own built-in health maintenance systems. Build-up of arterial plaque or cholesterol will be prevented; eyes or ears that do not perform perfectly will be modified; digestion will become a monitored process, digestive disorders a thing of the past. We will all enjoy perfect health and a prolonged life expectancy. Perhaps, if the nanomachines are much smaller than the cell level, and can work on our telomeres, we will have the potential to live forever.

  There is, naturally, a dark potential to all this. What happens if the self-replicating machines go out of control? In Cold as Ice, I introduced Fishel's Law and Epitaph: Smart is dumb; it is unwise to build too much intelligence into a self-replicating machine. Greg Bear saw the total end of humanity, when the nanomachines of Blood Music took over (Bear, 1985).

  We cannot say whether a world with fully-developed nanotechnology will be good or bad; what we can say is that it cannot be predicted from today's world. Nanotechnology represents a singularity of the time-line, a point beyond which rational extrapolation is impossible.

  10.6 Artificial life and assisted thought. In 1969, the English mathematician John Horton Conway introduced a paper-and-pencil game with an appropriate name: Life. Given a large sheet of paper marked into small squares, together with a few simple rules, objects can be generated that seem to possess many of the attributes of living organisms. They move across the page, grow, reproduce copies of themselves and of certain other organisms, and kill other organisms by "eating" them. The game was a big success in academic circles, and computer versions of it soon appeared.
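
  The rules fit in a few lines: a live square with two or three live neighbors survives, an empty square with exactly three live neighbors comes to life, and everything else dies or stays empty. Here is a minimal sketch in Python, run on the "glider," the best known of the moving objects just described; after four generations it reappears one square away, diagonally.

```python
from collections import Counter

def step(live_cells):
    """One generation of Life: survival with 2 or 3 live neighbors, birth with exactly 3."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The "glider" pattern, written as (row, column) pairs.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(r + 1, c + 1) for (r, c) in glider})   # True: it has moved diagonally
```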

  Slowly, through the 1970s and 1980s, the realization grew that computers might also be useful in studying life with a small "l." The behavior of competing organisms could be modeled. Those organisms could then be "released" into a computer environment and allowed to "evolve." The results would provide valuable information about population dynamics.

  But must it stop there? Suppose we take the science-fictional next step. We already pointed out, in discussing biological computers, that the DNA of any organism can be put into exact correspondence with a string of binary digits. In a real sense, those digits represent the organism. Given the digit string and suitable technology, we could construct the DNA, introduce it into the superstructure of a cell, and grow the organism itself.
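
  The correspondence itself is trivial to set up. A sketch, using an arbitrary two-bits-per-base assignment (any fixed assignment works equally well), applied to the complementary Hull string quoted earlier:

```python
TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}   # an arbitrary but fixed choice
TO_BASE = {bits: base for base, bits in TO_BITS.items()}

def dna_to_bits(strand):
    return "".join(TO_BITS[base] for base in strand)

def bits_to_dna(bits):
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

strand = "AGCGCCCTAATCTGACATTC"         # the complementary Hull string from the text
bits = dna_to_bits(strand)
print(bits)
print(bits_to_dna(bits) == strand)      # True: the correspondence is exact and reversible
```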

  We could do that, but why should we bother? We know that computer circuits operate millions of times faster than our own nerve cells. Couldn't we take various kinds of DNA, representing different organisms, prescribe the rules within the computer for growing the organism, and let the competition for genetic survival run not in the real world, but inside the computer? Maybe we can in that way speed up the process of evolution by a factor of many millions, and watch the emergence of new species in close to real time.
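
  Nobody can yet do this with real genomes, but the flavor of the idea is easy to convey with a toy sketch: bit-string "genomes," a stand-in fitness function, and selection with copying errors, all of it invented here purely for illustration.

```python
import random

random.seed(1)
GENOME_LENGTH, POPULATION, GENERATIONS, MUTATION_RATE = 32, 60, 40, 0.02

def fitness(genome):
    """A stand-in for survival value: here, simply the number of 1 bits."""
    return sum(genome)

def mutate(genome):
    """Copy a genome with a small chance of flipping each bit."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    # The fitter half "survives" and reproduces with copying errors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(fitness(g) for g in population))   # climbs toward GENOME_LENGTH
```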

  The practical problems are enormous. There is so much that we do not know about the development of a living creature. The necessary data must certainly be in the DNA, to allow an eye and a kidney and a brain to develop from a single original cell, but we have little idea how this "cellular differentiation" takes place. In fact, it is one of the central mysteries of biology.

  Meanwhile, we look at another possibility. Suppose, as discussed in the previous chapter, we were able to download into a computer the information content of a human brain. If Roger Penrose is right (see Chapter 13), this may not be possible until we have a quantum computer able to match the quantum functions of the human brain; but let us suppose we have that. Now we have something that does not evolve, as DNA representations might evolve, but thinks in the virtual environment provided inside the computer. This is artificial life, of a specific and peculiar kind. For one thing, the speed of computer thought should eventually exceed the speed of our flesh-and-blood wetware by a factor of millions or billions.

 
