A Brief History of Creation


by Bill Mesler


  By 1922, Oparin was presenting those new ideas on the subject of the origin of life to Soviet scientific bodies. In 1924, he sat down to write a book that would lay out what had by then blossomed into a grand theory. Like Haldane, Oparin tackled the problem in a way that was fundamentally different from the approach his scientific predecessors had taken. Scientists like Thomas Huxley or Henry Bastian had worked on the assumption that the first life had appeared on an Earth not so different from the Earth of their own day. And this process, they assumed, had happened rather quickly.

  Oparin and Haldane, on the other hand, were searching for answers about something that had happened, they believed, at least many hundreds of millions of years in the past on a planet that could scarcely be recognized and under conditions they could only creatively imagine. But both men had a great deal of evidence that simply had not been available to earlier evolutionists. Even though the study of the origin of life had stagnated for the previous four decades, the understanding of the conditions under which the first life appeared had changed dramatically. For the first time, scientists were beginning to appreciate that the Earth was much older than any of their predecessors had imagined, and that life had existed on the planet throughout most of its history.

  TIME HAD ALWAYS BEEN an enigma for Charles Darwin. He believed that the pace of evolution based on natural selection was extraordinarily slow, with species being transformed inch by inch through countless generations full of evolutionary dead ends and long periods of stagnation. It was hard to account for the evolution of the simplest microorganisms into complex species like human beings within any time frame his contemporaries were prepared to accept.

  The problem persisted, even though Buffon’s once-radical guess at the Earth’s age seemed absurdly timid by Darwin’s era. In the first edition of On the Origin of Species, Darwin had given his own reckoning of the Earth’s age. Like the estimates arrived at by Buffon and Ussher, Darwin’s figure was exact almost to the point of being silly. He estimated that the Earth was 306,662,400 years old, a figure based on his assessment, from geological clues, of the age of southern England.

  Darwin’s estimate drew the scrutiny of the Irish physicist William Thomson, usually remembered by the title he acquired later in life, Lord Kelvin. Kelvin was one of the most accomplished and publicly revered scientists of the age. His role in building the first transatlantic telegraph had brought him enormous fame, fantastic wealth, and ennoblement. He had also helped formulate the first and second laws of thermodynamics, which he had used to produce his own estimate of the age of the Earth. Like Buffon’s estimate, Kelvin’s was based on how long it would have taken the Earth to cool to its present temperature. Because he was unaware of the process of radioactive decay, which accounts for much of the heat generated below the Earth’s surface, Kelvin assumed that the Earth was a rigid sphere that had been cooling since its inception, and that he could judge its age by comparing the Earth’s exterior temperature with temperatures taken from its interior.

  Three years after the publication of Origin, Kelvin postulated that the Earth was between twenty million and four hundred million years old, but he revised the estimate downward in the coming years, largely to correspond with his much lower—and now recognized as vastly wrong—estimates of the age of the sun. By 1897, he had settled on twenty to forty million years—“much nearer 20 than 40.” Thomas Huxley attacked Kelvin’s methods as faulty, but even Darwin’s own son, the astronomer George Darwin, had put forth a relatively low estimate of fifty-six million years, which he based on his calculation of how long it would have taken the Earth to settle into its current twenty-four-hour cycle of daily rotation. The Earth’s age remained an important—and strongly contested—subject of debate through the end of the nineteenth century.

  In the face of so much dispute about the true figure, Charles Darwin removed the reference to the Earth’s age from the second and all future editions of Origin. The question puzzled him throughout his life and hindered acceptance of the slow process of natural selection as the principal agent of evolution. Even Darwin’s staunchest supporters conceded that natural selection would likely have required hundreds of millions of years, but such a time frame was difficult to reconcile with the best estimates of how much time evolution would actually have had.

  Then, a discovery in France set in motion a chain of events that upset all the assumptions about the Earth’s age. In 1896, a year before Kelvin gave his final estimate, a French physicist named Henri Becquerel happened to leave a packet of uranium salts on a Lumière photographic negative. Later, Becquerel returned to find an image of the packet burned onto the negative as if it had been photographed. By placing objects between his uranium and the negative, he found he could produce images of anything. The only conclusion he could draw was that invisible rays of energy were being emitted from the uranium salts. Two years later, Marie Curie discovered the elements polonium and radium. She coined the term “radioactivity” to describe the mysterious energy they emitted.

  In a remarkably short period of time after the discovery of radioactivity, physicists developed methods to measure the age of rocks based on the decay of radioactive elements. Every rock is made up of chemical elements, some of which are present as a mixture of isotopes, atoms of the same element that have different numbers of neutrons in their nuclei. Some isotopes are unstable and radioactive, and they are constantly, albeit slowly, decaying into other elements. The length of time it takes for half of the atoms of a given isotope to decay is called its half-life. Though each rock is initially endowed with some ratio of the isotopes of its various component elements, over time radioactive decay shifts some of these ratios. By measuring these ratios, geologists learned to calculate how much time had passed since the rock had formed. The process came to be called “radiometric dating.”
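
  The arithmetic behind this is simple enough to sketch. The snippet below is an illustration only, not a reconstruction of Boltwood’s or any later geologist’s actual procedure: it assumes a single radioactive parent isotope decaying into a single stable daughter, with none of the daughter present when the rock formed, and it borrows the roughly 4.47-billion-year half-life of uranium-238 purely as an example figure.

```python
import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Estimate a rock's age from the measured ratio of a radioactive
    parent isotope to its stable daughter product.

    Simplifying assumption (for illustration): no daughter atoms were
    present when the rock formed, so parent + daughter equals the
    original amount of parent.
    """
    # N(t) = N0 * (1/2)^(t / half_life); with N0 = parent + daughter,
    # solving for t gives t = half_life * log2(1 + daughter / parent).
    return half_life_years * math.log2(1 + daughter_atoms / parent_atoms)

# Hypothetical sample in which a quarter of the original uranium-238
# (half-life ~4.47 billion years) has already decayed to lead-206.
age = radiometric_age(parent_atoms=3, daughter_atoms=1, half_life_years=4.47e9)
print(f"{age / 1e9:.2f} billion years")  # roughly 1.85 billion years
```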

  In 1907, the chemist Bertram Boltwood published the results of a radiometric study of twenty-six rocks, one of which he found to be a staggering 570 million years old. As radiometric techniques were refined, the age of Boltwood’s oldest rock increased to 1.3 billion years. Other geologists were finding rocks even more ancient, including one from Ceylon that was 1.6 billion years old. It took until the middle of the twentieth century for most scientists to agree on a figure for the age of the Earth of about 4.5 billion years. Even by the time of Oparin’s return to Moscow, most scientists understood that the Earth was vastly older than anyone a century earlier would have imagined it to be.

  Yet the question still remained of just how long life had existed on the Earth. Huxley had postulated that abiogenesis was extremely rare, something that may have happened only once and only by a confluence of conditions and chance that made it extremely unlikely to happen again. It was possible that Earth had been lifeless for most of its existence. This was, in fact, exactly what the incomplete fossil record seemed to suggest.

  During the first half of the nineteenth century, geologists had to make do with fossils from the rare settings where ideal geological circumstances happened to coincide, and even then only those exposed at the surface, such as the fossils Darwin had found on the volcanic island of St. Jago, could be easily examined. The industrial revolution began to change all that. As long canals were constructed throughout the British Isles, connecting ports and coal-mining regions to inland industrial centers, geologists were left with deep, clean gashes into the Earth exposing strata that had accumulated over eons. They began to appreciate that certain fossils were always found in certain stratigraphic layers, and never in others. They didn’t yet know how old those layers were—such knowledge would come only with radiometric dating—but they did understand that certain layers were older than others.

  Eventually, they divided the time represented by the different layers into two long eons. The shorter and more recent was called the Phanerozoic eon, Greek for the “age of visible life.” The Phanerozoic was subdivided into even shorter geological periods, the earliest of which was called the Cambrian, a name coined by Adam Sedgwick after “Cambria,” the Latin name for Wales, where many of the first samples from that period were found. The Phanerozoic was, in turn, preceded by the older and vastly longer—and less imaginatively named—Precambrian eon.

  When Darwin wrote Origin, all of the fossils scientists had acquired came from the shorter, more recent Phanerozoic eon, which we now know comprises a mere 15 percent of the history of the Earth. He addressed his dilemma in Origin: “If the theory [evolution] be true, it is indisputable that before the lowest Cambrian stratum was deposited, long periods elapsed . . . and that during these vast periods, the world swarmed with living creatures . . . why we do not find rich fossiliferous deposits belonging to these assumed earliest periods prior to the Cambrian system, I can give no satisfactory answer. The case at present must remain inexplicable.” The answer would eventually be found nearly a half century later and half a globe away, in the United States, where a young geologist named Charles Doolittle Walcott was fast becoming the most important fossil hunter in the world.

  Raised in upstate New York by a single mother, Walcott never finished high school and never spent a day in college. As a teenager, he had become a professional fossil collector, selling his finds to universities, much as Alfred Russel Wallace had once supported himself by collecting live specimens. At age twenty-six, Walcott was hired as the assistant to the chief geologist of the state of New York, James Hall, a man as famous for his tyrannical disposition as for his expertise in paleontology—and he was very famous for his paleontology.

  Hall let Walcott in on one of his most intriguing discoveries: a strange-looking reef in a riverbed near the town of Saratoga, decorated by round patterns imprinted on limestone, each about a meter wide. Hall was convinced that the shapes were biological in origin and that they had been left by colonies of millions of microscopic algae. He called his hypothesized microbes Cryptozoon, “hidden life.” The trouble was that he didn’t have an actual fossil. Even in the twenty-first century, identifying a microscopic fossil is a painstaking, difficult task. Microbial cells are not that different in shape and size from all manner of natural nonliving particles. Since they have no skeleton, they do not fossilize well. Debate often hangs on contextual clues such as the type of environment where the surrounding rock was deposited and the ratios of various isotopes in elements such as carbon and sulfur, which can hint at the hand of biology. While modern micropaleontologists have sophisticated equipment for determining whether fossils are biological in origin, for Walcott and his contemporaries these techniques did not yet exist. There was little scientific acceptance of the evidence for Cryptozoon being of living origin. Walcott needed better microscopic evidence.

  Three years later, with Hall’s recommendation, Walcott was appointed to the newly formed US Geological Survey (USGS). Soon he was heading west to explore one of the greatest natural wonders of North America, the Grand Canyon, about which remarkably little was then known.

  Charles Doolittle Walcott at the Grand Canyon.

  The Grand Canyon turned out to be a paleontologist’s dream. For seventeen million years, the Colorado River had chiseled a course deep into the hard, rocky ground, leaving a majestic gash 277 miles long and over a mile deep. Though it is second in size to Nepal’s Kali Gandaki Gorge, the Grand Canyon’s bareness made it unrivaled in potential for study by a paleontologist like Walcott. Its features were not hidden by rich vegetation as they were in the Kali Gandaki, or even the smaller foothills of England and Scotland that had been explored by most of the century’s greatest fossil hunters. Its walls resembled the clean, layered faces of a canal, except it was a canal that cut a mile beneath the surface and through two billion years of rock formations.

  The leader of the USGS expedition, John Wesley Powell, realized the Grand Canyon’s potential as a site for paleontological study. He put Walcott to work doing what he did best, hunting for fossils. Walcott soon found signs of life similar to Hall’s Cryptozoon. More significant, he found them in what were almost certainly Precambrian rocks. In 1891, Walcott wrote that “there can be little, if any, doubt” that life indeed existed in Precambrian seas, but not until 1899 did he find the definitive evidence he was looking for. Twenty years after first being intrigued by Hall’s Cryptozoon, Walcott discovered in the Grand Canyon the fossilized remains of microscopic, single-celled algae that he named Chuaria after the rock strata in which they were found. Though his discovery remained controversial into the twentieth century, Chuaria-like fossils have now been dated to as far back as 1.6 billion years. Walcott had finally found Darwin’s missing piece of the fossil record. Eventually, even older fossils would be found, and scientists would come to accept that simple life-forms had existed for at least 3.5 billion years of the Earth’s 4.5-billion-year history.

  BY THE TIME Oparin and Haldane set about formulating their theories of the origin of life, they understood that the Earth was vastly older than any of their predecessors could have imagined. This was a crucial point because it meant that the environment of the planet when life first arose was probably nothing like that of the modern world. Oparin and Haldane could throw out the old assumptions about spontaneous generation—that any appearance of life from nonlife should be repeatable in an environment that would now be familiar to us—and instead speculate on what kind of world it would have taken to produce life.

  As Oparin set about working out his theories in the 1920s, and particularly in his 1936 book, he could draw on those facts to paint a new picture of a young Earth as it existed hundreds of millions or even billions of years earlier. It was an Earth so vastly different that it might as well have been an alien planet; its atmosphere, in particular, would have been radically different.

  Figuring out which elements had to be present was the easy part. Broken down to their most basic chemical parts, all living things are remarkably similar. They are also remarkably simple. From the smallest bacteria to the cells of the most complex species, living organisms are made primarily of carbon, hydrogen, oxygen, and nitrogen, the four basic elements of life that chemists often refer to by the acronym CHON. Other elements are found in trace amounts, most important among them sulfur and phosphorus, but about 98 percent of every living thing, by weight, is made up of the four elements C, H, O, and N.‡ Each of those elements was almost certainly abundant on Earth, and just about everywhere else. They are, in fact, four of the seven most common elements found in the universe.

  The hard part was understanding how these elements combined to form the more complex molecules required for life. The CHON elements may have been present, but their form on the primitive Earth was still an open question. Was the oxygen present only in water (H2O), or was it also free as O2 gas in the atmosphere, as is true of the modern Earth? To understand how life may have come about required first understanding what kinds of chemical compounds were available at the time.

  Oparin began with an assumption that there was no free oxygen gas in the primitive atmosphere. From astronomical observations that had been made of Jupiter, Oparin deduced that the early Earth’s atmosphere was filled with methane and ammonia. It was also an environment bathed in external energy, bombarded from above by cosmic rays and ultraviolet radiation, unchecked by the modern Earth’s ozone layer. The surface was wracked by constant volcanic activity far beyond anything experienced today. Excited by the bombardment of solar radiation and heated by the energy released by volcanoes, the atmospheric gases would have broken down into their constituent parts. These would have recombined into new compounds, some of which would have dissolved into the vast seas that covered most of the planet. This long chain of chemical events would have led to the synthesis of organic compounds and, eventually, to some sort of precellular structure that represented an intermediate stage between nonlife and life. Haldane’s vision of the early Earth was strikingly similar, and the areas of consensus between the two men’s theories formed the basis of the Oparin-Haldane hypothesis.

  In the decades to follow, as scientists learned more about the geological and astronomical conditions that had existed when life first appeared on Earth, several elements of the Oparin-Haldane hypothesis proved remarkably resilient. Geochemistry, the study of the ways the fundamental laws of chemistry can be used to explain planetary processes, would prove by the end of the century that the early Earth’s atmosphere did not, in fact, contain much oxygen—a condition that lasted for almost two billion years after the Earth’s formation, until biology invented oxygen-generating photosynthesis. And the lack of oxygen meant that the atmosphere would have had little ozone, leaving the Earth unprotected from ultraviolet radiation from the sun.

  This last fact, the high flux of energy, was extremely important in both Haldane’s and Oparin’s theories. It was the driving force for the natural synthesis of organic compounds. The compounds intermingled, forming simple molecular aggregates. These were simpler than any single-celled organism we would know of today, but complex enough to convert organic compounds into more copies of themselves. Some of them attained enough complexity that Haldane called them “half-living.” Oparin called these molecular aggregates “coacervates.”

  At this point, Haldane’s and Oparin’s visions diverged in ways that would be increasingly significant in the decades ahead. Each man had a different idea of what made something living. Oparin saw the key as cellular metabolism, the collection of chemical reactions that transform external foodstuff into living material. Life for him was a chemical process, and its essential components were proteins that helped these processes occur. This school of thought came to be known as the “metabolism first” tradition.

  For Haldane, the key to life lay in the gene. His concept of an intermediate stage between life and nonlife was influenced by the phenomenon of viruses, which scientists then understood in only a rudimentary fashion.§ Considerably smaller than bacteria, viruses would not be seen under microscopes until 1933, and there was considerable disagreement as to whether or not viruses were, in fact, living. Haldane was particularly intrigued by something called a bacteriophage, a virus that infected bacteria and seemed to Haldane to hold characteristics that might lead it to be called “half-living.” In 1915, the French-Canadian microbiologist Félix d’Herelle was struggling to understand why water from India’s Ganges and Yamuna Rivers seemed to have the remarkable trait of being able to protect people from cholera. Both rivers were filthy with sewage and teemed with harmful bacteria. D’Herelle found that the rivers also contained a remarkable “bacterium eater,” which some observers soon claimed possessed the ability to self-replicate within cells.

 
