The Blind Watchmaker


by Richard Dawkins


  3. The origin of life is a sufficiently probable event that it tends to arise about once per solar system (in our solar system Earth is the lucky planet).

  These three statements represent three benchmark views about the uniqueness of life. The actual uniqueness of life probably lies somewhere between the extremes represented by Statement 1 and Statement 3. Why do I say that? Why, in particular, should we rule out a fourth possibility, that the origin of life is a far more probable event than is suggested by Statement 3? It isn’t a strong argument, but, for what it is worth, it goes like this. If the origin of life were a much more probable event than is suggested by the Solar System Number we should expect, by now, to have encountered extraterrestrial life, if not in (whatever passes for) the flesh, at least by radio.

  It is often pointed out that chemists have failed in their attempts to duplicate the spontaneous origin of life in the laboratory. This fact is used as if it constituted evidence against the theories that those chemists are trying to test. But actually one can argue that we should be worried if it turned out to be very easy for chemists to obtain life spontaneously in the test-tube. This is because chemists’ experiments last for years not thousands of millions of years, and because only a handful of chemists, not thousands of millions of chemists, are engaged in doing these experiments. If the spontaneous origin of life turned out to be a probable enough event to have occurred during the few man-decades in which chemists have done their experiments, then life should have arisen many times on Earth, and many times on planets within radio range of Earth. Of course all this begs important questions about whether chemists have succeeded in duplicating the conditions of the early Earth but, even so, given that we can’t answer these questions, the argument is worth pursuing.

  If the origin of life were a probable event by ordinary human standards, then a substantial number of planets within radio range should have developed a radio technology long enough ago (bearing in mind that radio waves travel at 186,000 miles per second) for us to have picked up at least one transmission during the decades that we have been equipped to do so. There are probably about 50 stars within radio range if we assume that they have had radio technology for only as long as we have. But 50 years is just a fleeting instant, and it would be a major coincidence if another civilization were so closely in step with us. If we embrace in our calculation those civilizations that had radio technology 1,000 years ago, there will be something like a million stars within radio range (together with however many planets circle round each one of them). If we include those whose radio technology goes back 100,000 years, the whole trillion-star galaxy would be within radio range. Of course, broadcast signals would become pretty attenuated over such huge distances.
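
  The scaling behind these figures can be sketched in a few lines of Python. The only number borrowed from the argument above is the rough figure of about 50 stars within a 50-light-year horizon; everything else is just the geometry of a sphere, whose volume grows with the cube of its radius, so the printed counts are order-of-magnitude illustrations rather than astronomical data.

```python
# A toy sketch of the scaling argument above: radio waves cover one light-year
# per year, so N years of broadcasting gives a spherical 'radio horizon' of
# radius N light-years, and (treating stars as roughly evenly spread, which is
# a simplification) the number of candidate stars grows with the cube of N.
# The baseline of ~50 stars at 50 light-years is the rough figure quoted above.

BASE_YEARS = 50
BASE_STARS = 50

def stars_within_horizon(years: float) -> float:
    """Scale the baseline star count by the cube of the horizon radius."""
    return BASE_STARS * (years / BASE_YEARS) ** 3

for years in (50, 1_000, 100_000):
    print(f"{years:>7} years of radio -> ~{stars_within_horizon(years):,.0f} stars in range")
```

  Twenty times the horizon gives eight thousand times the stars; two thousand times the horizon gives a count in the hundreds of billions, comparable to the whole galaxy.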

  So we have arrived at the following paradox. If a theory of the origin of life is sufficiently ‘plausible’ to satisfy our subjective judgement of plausibility, it is then too ‘plausible’ to account for the paucity of life in the universe as we observe it. According to this argument, the theory we are looking for has got to be the kind of theory that seems implausible to our limited, Earth-bound, decade-bound imaginations. Seen in this light, both Cairns-Smith’s theory and the primeval-soup theory seem if anything in danger of erring on the side of being too plausible! Having said all this I must confess that, because there is so much uncertainty in the calculations, if a chemist did succeed in creating spontaneous life I would not actually be disconcerted!

  We still don’t know exactly how natural selection began on Earth. This chapter has had the modest aim of explaining only the kind of way in which it must have happened. The present lack of a definitely accepted account of the origin of life should certainly not be taken as a stumbling block for the whole Darwinian world view, as it occasionally — probably with wishful thinking — is. The earlier chapters have disposed of other alleged stumbling blocks, and the next chapter takes up yet another one, the idea that natural selection can only destroy, never construct.

  CHAPTER 7

  Constructive evolution

  People sometimes think that natural selection is a purely negative force, capable of weeding out freaks and failures, but not capable of building up complexity, beauty and efficiency of design. Does it not merely subtract from what is already there, and shouldn’t a truly creative process add something too? One can partially answer this by pointing to a statue. Nothing is added to the block of marble. The sculptor only subtracts, but a beautiful statue emerges nevertheless. But this metaphor can mislead, for some people leap straight to the wrong part of the metaphor — the fact that the sculptor is a conscious designer — and miss the important part: the fact that the sculptor works by subtraction rather than addition. Even this part of the metaphor should not be taken too far. Natural selection may only subtract, but mutation can add. There are ways in which mutation and natural selection together can lead, over the long span of geological time, to a building up of complexity that has more in common with addition than with subtraction. There are two main ways in which this build-up can happen. The first of these goes under the name of ‘coadapted genotypes’; the second under the name of ‘arms races’. The two are superficially rather different from one another, but they are united under the headings of ‘coevolution’ and ‘genes as each other’s environments’.

  First, the idea of ‘coadapted genotypes’. A gene has the particular effect that it does only because there is an existing structure upon which to work. A gene can’t affect the wiring up of a brain unless there is a brain being wired up in the first place. There won’t be a brain being wired up in the first place, unless there is a complete developing embryo. And there won’t be a complete developing embryo unless there is a whole program of chemical and cellular events, under the influence of lots and lots of other genes, and lots and lots of other, non-genetic, causal influences. The particular effects that genes have are not intrinsic properties of those genes. They are properties of embryological processes, existing processes whose details may be changed by genes, acting in particular places and at particular times during embryonic development. We saw this message demonstrated, in elementary form, by the development of the computer biomorphs.

  In a sense, the whole process of embryonic development can be looked upon as a cooperative venture, jointly run by thousands of genes together. Embryos are put together by all the working genes in the developing organism, in collaboration with one another. Now comes the key to understanding how such collaborations come about. In natural selection, genes are always selected for their capacity to flourish in the environment in which they find themselves. We often think of this environment as the outside world, the world of predators and climate. But from each gene’s point of view, perhaps the most important part of its environment is all the other genes that it encounters. And where does a gene ‘encounter’ other genes? Mostly in the cells of the successive individual bodies in which it finds itself. Each gene is selected for its capacity to cooperate successfully with the population of other genes that it is likely to meet in bodies.

  The true population of genes, which constitutes the working environment of any given gene, is not just the temporary collection that happens to have come together in the cells of any particular individual body. At least in sexually reproducing species, it is the set of all genes in the population of interbreeding individuals — the gene ‘pool’. At any given moment, any particular copy of a gene, in the sense of a particular collection of atoms, must be sitting in one cell of one individual. But the set of atoms that is any one copy of a gene is not of permanent interest. It has a life-expectancy measured only in months. As we have seen, the long-lived gene as an evolutionary unit is not any particular physical structure but the textual archival information that is copied on down the generations. This textual replicator has a distributed existence. It is widely distributed in space among different individuals, and widely distributed in time over many generations. When looked at in this distributed way, any one gene can be said to ‘meet’ another when they find themselves sharing a body. It can ‘expect’ to meet a variety of other genes in different bodies at different times in its distributed existence, and in its march through geological time. A successful gene will be one that does well in the environments provided by these other genes that it is likely to meet in lots of different bodies. ‘Doing well’ in such environments will turn out to be equivalent to ‘collaborating’ with these other genes. It is most directly seen in the case of biochemical pathways.

  Biochemical pathways are sequences of chemicals that constitute successive stages in some useful process, like the release of energy or the synthesis of an important substance. Each step in the pathway needs an enzyme — one of those large molecules that are shaped to act like a machine in a chemical factory. Different enzymes are needed for different steps in the chemical pathway. Sometimes there are two, or more, alternative chemical pathways to the same useful end. Although both pathways culminate in the identical useful result, they have different intermediate stages leading up to that end, and they normally have different starting points. Either of the two alternative pathways will do the job, and it doesn’t matter which one is used. The important thing for any particular animal is to avoid trying to do both at once, for chemical confusion and inefficiency would result.

  Now suppose that Pathway 1 needs the succession of enzymes A1, B1 and C1, in order to synthesize a desired chemical D, while Pathway 2 needs enzymes A2, B2 and C2 in order to arrive at the same desirable end-product. Each enzyme is made by a particular gene. So, in order to evolve the assembly line for Pathway 1, a species needs the genes coding for A1, B1 and C1 all to coevolve together. In order to evolve the alternative assembly line for Pathway 2, a species would need the genes coding for A2, B2 and C2 to coevolve with one another. The choice between these two coevolutions doesn’t come about through advance planning. It comes about simply through each gene being selected by virtue of its compatibility with the other genes that already happen to dominate the population. If the population happens to be already rich in genes for B1 and C1, this will set up a climate favouring the A1 gene rather than the A2 gene. Conversely, if the population is already rich in genes for B2 and C2 this will set up a climate in which the A2 gene is favoured by selection rather than the A1 gene.
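
  The ‘climate’ effect can be made concrete with a deliberately simple simulation, sketched below. The haploid genetics, the selection coefficient and the starting frequencies are all invented for illustration, and the three loci are treated as if they were inherited independently.

```python
# A toy, haploid model of the 'genetic climate': the A1 allele is fitter only
# to the extent that its pathway partners B1 and C1 are common in the gene
# pool, and A2 only to the extent that B2 and C2 are common. All numbers here
# are invented for illustration.

def next_generation(freqs, s=0.1):
    """freqs maps each locus to the frequency of its '1' allele; one round of
    selection nudges each allele according to how common its partners are."""
    new = {}
    for locus, p in freqs.items():
        partners = [f for other, f in freqs.items() if other != locus]
        support = sum(partners) / len(partners)     # how common the '1' team is
        w1 = 1.0 + s * support                      # fitness of the '1' allele
        w2 = 1.0 + s * (1.0 - support)              # fitness of the '2' allele
        new[locus] = p * w1 / (p * w1 + (1.0 - p) * w2)
    return new

freqs = {"A": 0.50, "B": 0.55, "C": 0.55}           # B1 and C1 start slightly ahead
for _ in range(200):
    freqs = next_generation(freqs)
print({locus: round(p, 3) for locus, p in freqs.items()})
```

  Because B1 and C1 happen to start out slightly in the majority, A1 is favoured, and its rise in turn strengthens the climate favouring B1 and C1: the Pathway 1 team assembles itself without any advance planning.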

  It will not be as simple as that, but you will have got the idea: one of the most important aspects of the ‘climate’ in which a gene is favoured or disfavoured is the other genes that are already numerous in the population; the other genes, therefore, with which it is likely to have to share bodies. Since the same will obviously be true of these ‘other’ genes themselves, we have a picture of teams of genes all evolving towards cooperative solutions to problems. The genes themselves don’t evolve, they merely survive or fail to survive in the gene pool. It is the ‘team’ that evolves. Other teams might have done the job just as well, or even better. But once one team has started to dominate the gene pool of a species it thereby has an automatic advantage. It is difficult for a minority team to break in, even a minority team which would, in the end, have done the job more efficiently. The majority team has an automatic resistance to being displaced, simply by virtue of being in the majority. This doesn’t mean that the majority team can never be displaced. If it couldn’t, evolution would grind to a halt. But it does mean that there is a kind of built-in inertia.

  Obviously this kind of argument is not limited to biochemistry. We could make the same kind of case for clusters of compatible genes building the different parts of eyes, ears, noses, walking limbs, all the cooperating parts of an animal’s body. Genes for making teeth suitable for chewing meat tend to be favoured in a ‘climate’ dominated by genes making guts suitable for digesting meat. Conversely, genes for making plant-grinding teeth tend to be favoured in a climate dominated by genes that make guts suitable for digesting plants. And vice versa in both cases. Teams of ‘meat-eating genes’ tend to evolve together, and teams of ‘plant-eating genes’ tend to evolve together. Indeed, there is a sense in which most of the working genes in a body can be said to cooperate with each other as a team, because over evolutionary time they (i.e. ancestral copies of themselves) have each been part of the environment in which natural selection has worked on the others. If we ask why the ancestors of lions took to meat-eating, while the ancestors of antelopes took to grass-eating, the answer could be that originally it was an accident. An accident, in the sense that it could have been the ancestors of lions that took up grass-eating, and the ancestors of antelopes that took up meat-eating. But once one lineage had begun to build up a team of genes for dealing with meat rather than grass, the process was self-reinforcing. And once the other lineage had begun to build up a team of genes for dealing with grass rather than meat, that process was self-reinforcing in the other direction.

  One of the main things that must have happened in the early evolution of living organisms was an increase in the numbers of genes participating in such cooperatives. Bacteria have far fewer genes than animals and plants. The increase may have come about through various kinds of gene duplication. Remember that a gene is just a length of coded symbols, like a file on a computer disc; and genes can be copied to different parts of the chromosomes, just as files can be copied to different parts of the disc. On my disc that holds this chapter there are officially just three files. By ‘officially’ I mean that the computer’s operating system tells me that there are just three files. I can ask it to read one of these three files, and it presents me with a one-dimensional array of alphabetical characters, including the characters that you are now reading. All very neat and orderly, it seems. But in fact, on the disc itself, the arrangement of the text is anything but neat and orderly. You can see this if you break away from the discipline of the computer’s own official operating system, and write your own private programs to decipher what is actually written on every sector of the disc. It turns out that fragments of each of my three files are dotted around, interleaved with each other and with fragments of old, dead files that I erased long ago and had forgotten. Any given fragment may turn up, word for word the same, or with minor differences, in half a dozen different places all around the disc.

  The reason for this is interesting, and worth a digression because it provides a good genetic analogy. When you tell a computer to delete a file, it appears to obey you. But it doesn’t actually wipe out the text of that file. It simply wipes out all pointers to that file. It is as though a librarian, ordered to destroy Lady Chatterley’s Lover, simply tore up the card from the card index, leaving the book itself on the shelf. For the computer, this is a perfectly economical way to do things, because the space formerly occupied by the ‘deleted’ file is automatically available for new files, as soon as the pointers to the old file have been removed. It would be a waste of time actually to go to the trouble of filling the space itself with blanks. The old file won’t itself be finally lost until all its space happens to be used for storing new files.

  But this re-using of space occurs piecemeal. New files aren’t exactly the same size as old ones. When the computer is trying to save a new file to a disc, it looks for the first available fragment of space, writes as much of the new file as will fit, then looks for another available fragment of space, writes a bit more, and so on until all the file is written somewhere on the disc. The human has the illusion that the file is a single, orderly array, only because the computer is careful to keep records ‘pointing’ to the addresses of all the fragments dotted around. These ‘pointers’ are like the ‘continued on page 94’ pointers used by the New York Times. The reason many copies of any one fragment of text are found on a disc is that if, like all my chapters, the text has been edited and re-edited many dozens of times, each edit will result in a new saving to the disc of (almost) the same text. The saving may ostensibly be a saving of the same file. But as we have seen, the text will in fact be repeatedly scattered around the available ‘gaps’ on the disc. Hence multiple copies of a given fragment of text can be found all around the surface of the disc, the more so if the disc is old and much used.
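
  The behaviour described in the last two paragraphs is easy to mimic with a toy model. The sketch below, with a made-up twelve-block ‘disc’ and invented filenames, is an illustration of the idea rather than the code of any real operating system: deleting a file merely forgets its pointers, and saving a file scatters it, first-fit, across whatever blocks are currently free.

```python
# A toy 'disc' illustrating the two points above. Deleting a file only removes
# its pointer chain from the index; saving a file writes it first-fit into the
# earliest blocks no surviving file points to.

BLOCKS = 12
BLOCK_SIZE = 6
disc = [""] * BLOCKS   # the raw surface of the disc
index = {}             # filename -> ordered list of block numbers (the pointers)

def save(name, text):
    """Write the text, chunk by chunk, into the earliest unclaimed blocks."""
    in_use = {b for chain in index.values() for b in chain}
    free_blocks = (b for b in range(BLOCKS) if b not in in_use)
    chunks = [text[i:i + BLOCK_SIZE] for i in range(0, len(text), BLOCK_SIZE)]
    chain = []
    for block, chunk in zip(free_blocks, chunks):
        disc[block] = chunk          # old contents are simply overwritten
        chain.append(block)          # the 'continued on block ...' pointer
    index[name] = chain

def delete(name):
    """Tear up the card in the card index; leave the book on the shelf."""
    del index[name]                  # the text itself stays on the disc

save("draft1", "an old draft, erased but never wiped")
save("keep",   "KEEP THIS")
delete("draft1")                     # only the pointers go
save("note",   "short note")         # reuses just the first two freed blocks
print(disc)                          # a raw scan still shows fragments of draft1
save("draft2", "a new draft long enough to fragment")
print(index["draft2"])               # a pointer chain that skips keep's blocks
```

  The raw scan turns up fragments of draft1 long after it was ‘deleted’, until later saves happen to overwrite them; and draft2 ends up stored as a non-contiguous chain of blocks, held together only by its pointers.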

  Now the DNA operating system of a species is very very old indeed, and there is evidence that, seen in the long term, it does something a bit like the computer with its disc files. Part of the evidence comes from the fascinating phenomenon of ‘introns’ and ‘exons’. Within the last decade, it has been discovered that any ‘single’ gene, in the sense of a single continuously read passage of DNA text, is not all stored in one place. If you actually read the code letters as they occur along the chromosome (i.e. if you do the equivalent of breaking out of the discipline of the ‘operating system’) you find fragments of ‘sense’, called exons, separated by portions of ‘nonsense’, called introns. Any one ‘gene’, in the functional sense, is in fact split up into a sequence of fragments (exons) separated by meaningless introns. It is as if each exon ended with a pointer saying ‘continued on page 94’. A complete gene is then made up of a whole series of exons, which are actually strung together only when they are eventually read by the ‘official’ operating system that translates them into proteins.
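
  A rough sketch of that splicing, with an invented sequence and invented exon coordinates rather than real genetic data:

```python
# The 'chromosomal' text below is stored as sense fragments (exons) interrupted
# by nonsense (introns); the working copy of the gene is made by stringing the
# exons together in order. Sequence and coordinates are made up for illustration.

chromosome_text = "METHIxxxxxxONINE-LYSxxxINE-GLYCINE"
exon_spans = [(0, 5), (11, 20), (23, 34)]   # (start, end) of each exon

def splice(text, exons):
    """Concatenate the exon stretches, skipping the introns between them."""
    return "".join(text[start:end] for start, end in exons)

print(splice(chromosome_text, exon_spans))  # -> METHIONINE-LYSINE-GLYCINE
```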

  Further evidence comes from the fact that the chromosomes are littered with old genetic text that is no longer used, but which still makes recognizable sense. To a computer programmer, the pattern of distribution of these ‘genetic fossil’ fragments is uncannily reminiscent of the pattern of text on the surface of an old disc that has been much used for editing text. In some animals, a high proportion of the total number of genes is in fact never read. These genes are either complete nonsense, or they are outdated ‘fossil genes’.

  Just occasionally, textual fossils come into their own again, as I experienced when writing this book. A computer error (or, to be fair, it may have been human error) caused me accidentally to ‘erase’ the disc containing Chapter 3. Of course the text itself hadn’t literally all been erased. All that had been definitely erased were the pointers to where each ‘exon’ began and ended. The ‘official’ operating system could read nothing, but ‘unofficially’ I could play genetic engineer and examine all the text on the disc. What I saw was a bewildering jigsaw puzzle of textual fragments, some of them recent, others ancient ‘fossils’. By piecing together the jigsaw fragments, I was able to recreate the chapter. But I mostly didn’t know which fragments were recent and which were fossil. It didn’t matter for, apart from minor details that necessitated some new editing, they were the same. At least some of the ‘fossils’, or outdated ‘introns’, had come into their own again. They rescued me from my predicament, and saved me the trouble of rewriting the entire chapter.

 
