Thus, emergence is not a mystical or antiscientific principle, and certainly provides no brief for any kind of assumption or preference that might be called religious (in conventional terms). Emergence is a scientific claim about the physical nature of complex systems. And if emergent principles become more and more important as we mount the scale of complexity in natural systems, then the reductionistic research program, despite its past triumphs and continuing importance, will fail both as a general claim about the structure of material reality (the hardest version) and as a heuristic proposition about inevitable (or even most fruitful) ways for advancing scientific knowledge (the weaker or methodological version).
2. Contingency. Historical uniqueness has always been a bugbear for classically trained scientists. We cannot deny either the existence or the factuality of unique historical events (yes, the Brits whupped the French at Agincourt in 1415, and the Twin Towers fell on September 11), but we also recognize that no general principle could have predicted the details, and that no law of nature demanded this particular “then and there.” Unique facts that didn’t have to occur, could never have been predicted beforehand (however much we may later explain the outcomes in fine detail), and will never happen exactly again in all their detailed glory make us very uncomfortable indeed. For we must face (and explain) facts as scientists, but this kind of information does not seem to represent science as we usually understand the concept. We can only hope that we don’t need to factor such empirical uniqueness into our explanations very frequently, or in any important way.
And often we are rewarded. Quartz is quartz—and predictably formed when four oxygen ions surround each silicon ion to form a tetrahedron, with each vertex shared between two tetrahedra, yielding the formula SiO2. Our specimen may have formed a billion years ago in Africa, or fifty years ago in a Nevada bomb crater. We can’t even imagine a granting of individuality to the gazillions of tetrahedra in each specimen. Who would dream of contrasting George, the oxygen ion from Africa, with Martha, his counterpart from Nevada?
However, and equally obviously, we do care very much that Tyrannosaurus lived in the western United States and apparently became extinct when a large extraterrestrial object struck the earth 65.3 million years ago, and that Homo sapiens evolved in Africa, spread throughout the world in short order (evolutionarily speaking), and may not survive the next millennium, a mere geological microsecond. The contrast between the quartz and the creatures may be largely factual, but also includes a strong psychological component that we rarely acknowledge with sufficient clarity. Quartz may represent so simple a system that we couldn’t separate George from Martha even if we cared, while a Tyrannosaurus would attract notice in human society, even on that fabled New York subway where no one recognizes a well-dressed Neanderthal. But, in large part, we also don’t generally give much of a damn about the individuality of simple and apparently repeatable systems; what would we gain, either scientifically or socially, if we could pull out that quartz crystal and say to a friend or colleague: “This is George from the African Cambrian”?
Again, I don’t wish to belabor an obvious point (about which I have written ad nauseam, even by my standards). Think what you may about reductionism as a procedure for explanation in science. Whatever it may do, however it may work, whatever its range as a favored mode of science: unique historical events in highly complex systems happen for “accidental” reasons, and cannot be explained by classical reductionism. (I do not mean that a kingdom can’t be lost for want of a horseshoe nail—that is, that we might trace a very complex outcome to a simple initiating trigger. But the trigger itself can only record another contingency, perhaps of a different level or order. We will not explain Agincourt by the physics of the longbow, or September 11 by the neurology of psychopathology in general, not to mention Mr. bin Laden in particular.)
So, if adequate scientific understanding includes the necessary explanation of large numbers of contingent events, then reductionism cannot provide the only light and way. The general principle of ecological pyramids will help me to understand why all ecosystems hold more biomass in prey than predators, but when I want to know why a dinosaur named Tyrannosaurus played the role of top carnivore 65 million years ago in Montana, why a collateral descendant group of birds, called phorusrhacids, nudged out mammals for a similar role in Tertiary South America (at least until the Isthmus of Panama arose and jaguars and their kin moved south), why marsupial thylacines served on the island continent of Australia, and why Ko-Ko cadged both a rhyme and an “in joke” when he claimed to Katisha that he “never saw a tiger from the Congo or the Niger”—well, then I am asking particular questions about history: real and explainable facts to be sure, but only resolvable by the narrative methods of historical analysis, and not by the reductionistic techniques of classical science.
The central importance of contingency as a denial of reductionism in the sciences devoted to understanding human evolution, mentality, and social or cultural organization strikes me as one of the most important, yet least understood, principles of our intellectual strivings. I confess that I have been particularly frustrated by this theme, for the point seems evident and significant to me, and yet I have been singularly unsuccessful in conveying either my understanding or my concern, despite many attempts. Perhaps I am simply wrong (the most obvious resolution, I suppose); but perhaps I have just never figured out how to convey the argument well. Or perhaps—my own arrogant suspicion, I admit—we just don’t want to hear the claim.
My point is simply this: Ever since the psalmist declared us just a little lower than the angels and crowned us with glory and honor, we have preferred to think of Homo sapiens not only as something special (which I surely do not deny), but also as something ordained, necessary, or, at the very least, predictable from some form of general process (a common position, although defended for obviously different reasons, in the long histories of our professions and within the full gamut of our views on human nature and origins, from pure Enlightenment secularism to evangelical special creationism). In terminology that I have often used before, we like to think of ourselves as the apotheosis of a tendency, the end result of some predictable generality, rather than as a fortuitous entity, a single and fully contingent item of life’s history that never had to arise, but fortunately did (at least for the cause of some cognition on the surface of this particular planet, whatever the eventual outcome thereof).
This mistaken view of ourselves as the predictable outcome of a tendency, rather than as a contingent entity, leads us badly astray in many ways far too numerous to mention. But, in the context of this book’s brief for the best way to link science with the humanities, our status as a contingent entity holds special salience as a strong argument against Wilson’s favored solution of conjunction by reductive consilience. Because we so dearly wish to view ourselves as something general, if not actually ordained, we tend to imbue the universal properties of our species—especially the cognitive aspects that distinguish us from all other creatures—with the predictable characteristics of standard scientific generalities. When philosophers, from Antiquity on, have analyzed our modes of thinking, and when scientists, from the beginning of our inquiries, have tried to understand our modes of being, these scholars have generally assumed that any identified universal must, ipso facto, arise from a lawlike principle, finally manifested at the acme of a tendency embodying all the generality of any natural law or necessity of logic. Thus, whatever we do cognitively (and that no other species can accomplish) becomes part of the definition of cognition as a general principle of complex systems. If our most distinctive property of syntax in language displays certain peculiarities throughout our species, then communication in general must so function. If our arts manifest common themes, then universal aesthetics must embody certain rules of color or geometry.
These subtle, almost always unstated (and probably, for the most part, unconscious) assumptions also prompt the interesting consequence—a serious fallacy in my view—of almost inevitably encouraging a belief that the humanities, if they so embody the only known expressions of phenomena that must represent the highest forms of general and natural tendencies, should be incorporated within science, even though these generalities are, unfortunately for science (which seeks experimental replication above all), expressed in only one species, at least in this world. (This misreading, however, also helps to inspire—and here I mute my criticism because the work can be good, and the questions remain fascinating, whatever the psychological fallacy behind some reasons for the asking—much scientific and semiscientific work loosely coordinated around the theme of trying to make or find another such mind, including attempts to teach language to great apes, work in AI or “artificial intelligence,” and the search for intelligent life on other worlds.)
But if Homo sapiens represents more of a contingent and improbable fact of history than the apotheosis of a predictable tendency, then our peculiarities, even though they be universal within our species, remain more within the narrative realm of the sciences of historical contingency than within the traditional, and potentially reductionist, domain of repeated and predictable natural phenomena generated by laws of nature. And in that case, all the distinctive human properties that feed the practices of the humanities—even the factual aspects that can help us to understand why we feel, paint, build, dance, and write as we do—will, as products of a truly peculiar mind (developed only once on this planet), fall largely into the domain of contingency, and largely outside the style of science that might be subject to Wilson’s kind of subsumption within the reductionist chain.
In any case, and to generalize the obvious point, contingency tends to “grab” more and more of what science needs to know as we mount the conventional reductionist chain from the most “basic” science of small, relatively simple, and universal constituents, to the most complex studies of large, messy, multifaceted systems full of emergent properties based on complex webs of massively nonadditive interactions. And although science can study contingency just as well as any other factual subject, such understanding must be achieved primarily by the different methods of narrative explanation, and not by pure reductive prediction. So, as a general statement with many potential exceptions, the “higher” we mount, the less we can rely on reductionism for the twinned reasons of (1) ever greater influence of emergent principles, and (2) ever greater accumulation of historical accidents requiring narrative explanations as contingencies. The “topmost” fields of the humanities, whose potential for incorporation within the reductionist chain expresses Wilson’s primary hope and rationale for his book Consilience, seem least likely, for both these reasons, to assume a primary place and definition as the most complex factual systems subject to standard analysis by reductionistic science.
To end with a specific example, the structure of the human genome “met the press” on February 12, 2001. (I will grant some coincidental status to the millennial year, but I know that the choice of both Darwin’s and Lincoln’s birthday—yes, they were born on the same day, not just the same date, in 1809—recorded a smart and conscious decision in our world of media and symbols.) At this briefing, and with full justification, the press, and the public in general, seemed most surprised by the astonishing discovery that our genome only includes some 30,000 genes, whereas the humble laboratory standard, the fruit fly Drosophila, holds half as many, and the far more featureless “worm” of equal laboratory fame, the nematode C. elegans (looking like little more than a tiny tube with a bit of anatomical complexity at the genitalia, but virtually nowhere else) has 19,000 genes.
Before this announcement, most estimates had ranged from 120,000 to 150,000, with one company even advertising a precise number of 142,634, and offering to sell their information on individual sequences of genes with potential medical (and therefore commercial) value. This number seemed entirely reasonable because, in some evident sense that even I would not dream of denying, the greater “complexity” of humans, even over the most elegant of nematodes, does seem to require a far greater variety of building blocks as architecture for the intricate totality—and, in common parlance and understanding, each gene ultimately codes for a protein, and congeries of proteins make bodies. So how can humans be so complex with only half again as many genes as a worm—and we refer here not even to a respectably large and somewhat complex earthworm (of a different phylum), but rather to that tiny and featureless, nearly invisible, laboratory denizen, blessed only with the fine name of C. elegans?
No one knows the answer for sure, but the basic outline seems clear enough. Genes don’t make proteins directly. Rather, they replicate themselves, and they serve as templates for the formation of distinctive RNAs, which then, through a complex chain of events, eventually assemble the vast array of proteins needed to construct a complex human body. One key component of the initial assumption has not been, and probably cannot be, challenged. We are, admittedly in some partly subjective sense, far more complex than those blasted worms—and this increment in complexity does require far more components as building blocks. The estimate of 120,000 to 150,000 probably falls in “the right ballpark.”
But this number cites the diversity of proteins needed to construct our complexity, and each protein does indeed require a distinctive RNA message as architect. So the 120,000 to 150,000 messages exist, and our previous error must be attributed to a false assumption in the most linear form of reductionistic thinking: namely, that each final protein can be traced back to a distinctive gene through a single chain of causation and construction that, in the early days of molecular biology, received the designation of a “central dogma” (thus showing, by the way, that scientists maintain a decent sense of humor and a high capacity for self-mockery, in this case by expressing a putative basic truth in a manner honorably recognized as overly simplified): “DNA makes RNA makes protein,” or the concept of one linear chain of causation extending outward from each gene.
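To see just how linear this picture is, consider a minimal sketch in Python (the “gene” sequence and the drastically abbreviated codon table are hypothetical, invented purely for illustration; a real table has sixty-four entries): one gene is transcribed into exactly one RNA message, and that one message is translated into exactly one protein.

```python
# A toy rendering of the central dogma in its simplest, strictly linear form:
# one gene -> one RNA message -> one protein. The "gene" and the abbreviated
# codon table below are hypothetical placeholders for illustration only.

CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # stop codon
}

def transcribe(dna: str) -> str:
    """DNA makes RNA: on the coding strand, swap thymine (T) for uracil (U)."""
    return dna.replace("T", "U")

def translate(rna: str) -> list:
    """RNA makes protein: read the message three letters at a time until STOP."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGTTTGGCTAA"           # a hypothetical four-codon "gene"
message = transcribe(gene)      # -> "AUGUUUGGCUAA"
print(translate(message))      # -> ['Met', 'Phe', 'Gly']
```

On this reading, the map from genes to proteins runs strictly one to one, and that is precisely the assumption the 30,000-gene count overturned.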
We must also admit that a powerful commercial interest backed this simplest idea that each protein records the coding and ultimate action of a single gene. For if a disorder arises from a particular misconstruction in a specific protein, and if we could sequence the gene coding for this protein, then we might learn how to “fix” the gene, correct the protein, and cure the disease. Thus the debate about patenting genes represented no mere academic exercise for a university’s moot court, but rather reflected a driving commercial concern of the large, growing, and highly speculative industry of biotechnology.
Now, of course, whatever one might want to say about scientists, we are not, in general, especially stupid. Just as the very phrase “central dogma” recorded our acknowledgment of a recognized oversimplification, no one ever believed that most diseases would be traced to an easily fixable screw-up in a single protein (although some diseases will be so caused and potentially correctable, and these should be pursued with vigor, provided that we don’t deceive ourselves about general theory, and go further astray for the majority of others than we succeed for these fortunate few). And no one ever thought that the simplest form of pure reductionism—a bunch of independent genes, each creating a different protein, one for one, and without any emergent properties to gum up the simple pathways—would describe the embryological construction of the human body. But we do follow an operational tendency to begin with the simplest and most workable model, and then to follow this style of research as far as we can—and we do often make the common mistake of slipping into an assumption that initial operational efficacy might equal ultimate material reality.
Wilson acknowledges these points, and even invokes another humorous acronym to stress the same self-mockery as the central dogma. He expresses more enthusiasm than I could ever muster for the practical range of simplest one-for-one cases, but he also recognizes the probable greater complexity for elaborate mental traits of his primary interest:

Over 1,200 physical and psychological disorders have been tied to single genes. The result is the OGOD principle: One Gene, One Disease. So successful is the OGOD approach that researchers joke about the Disease of the Month reported in scientific journals and mainstream media. . . . Researchers and practicing physicians are especially pleased with the OGOD discoveries, because a single gene mutation invariably has a biochemical signature that can be used to simplify diagnosis. . . . Hope also rises that genetic disease can be corrected with magic-bullet therapy, by which one elegant and noninvasive procedure corrects the biomedical defect and erases the symptoms of the disease. For all its early success, however, the OGOD principle can be profoundly misleading when applied to human behavior. While it is true that a mutation in a single gene often causes a significant change in a trait, it does not at all follow that the gene determines the organ or process affected. Typically, many genes contribute to the prescription of each complex biological phenomenon.
So how do 30,000 genes make up to five times as many messages? Obviously, as we knew (but hoped to identify as a rare exception rather than the evident generality), the linear and independent chains of the central dogma bear little relationship to true organic architecture, and each gene must make (or aid in making) far more than one protein, on average. In fact, we have known (and extensively studied) many potential reasons for at least two decades. The original Watson-Crick models did envision the genome as, to cite the common phrase, “beads on a string”—that is, as a linear sequence of genes stacked end to end, one after the other. But, among many other aspects of genetic structure, two properties of genomes especially discredited the simple bead models long ago. First, the vast majority of nucleotides in the genomes of complex organisms don’t code for genes at all, and do not seem to “make” anything of importance to bodies (so-called “junk DNA”). Only one percent or so of the human genome accounts for those circa 30,000 genes. Second, and more important, genes are not discrete chains of nucleotides, but are built in pieces of coding regions (called exons) interspersed with other sequences of nucleotides that do not code for any part of the final protein (called introns). In assembling a gene’s message, the introns are snipped out of the initial RNA transcript and the exons joined together, so that the finished RNA reads as the sequence of conjoined exons. Now, if a gene consists of, say, five exons, we can easily envision several mechanisms for making many different proteins from a single gene. Just consider the two most obvious: either combine the exons in different orders, or leave some of the exons out.
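The combinatorial power of these two mechanisms is easy to check. Here is a back-of-the-envelope enumeration in Python for a hypothetical five-exon gene (the exon names are placeholders, and real splicing obeys regulatory constraints that this sketch ignores):

```python
# Count the messages one five-exon gene could yield under the two mechanisms
# named above. Exon names are hypothetical placeholders.
from itertools import combinations, permutations

exons = ["e1", "e2", "e3", "e4", "e5"]
n = len(exons)

# Mechanism 1: leave some exons out, but keep the survivors in their
# original order. Every nonempty subset is one possible message.
in_order = [c for k in range(1, n + 1) for c in combinations(exons, k)]
print(len(in_order))   # 2**5 - 1 = 31 messages

# Mechanism 2: also allow the chosen exons to be joined in any order.
reordered = [p for k in range(1, n + 1) for p in permutations(exons, k)]
print(len(reordered))  # 5 + 20 + 60 + 120 + 120 = 325 messages
```

Even the tamer mechanism turns a single five-exon gene into thirty-one potential messages; on this toy arithmetic, 30,000 genes need to average only five messages apiece to supply 150,000 proteins, a demand the splicing machinery could meet with room to spare.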