But suppose Canning had killed Castlereagh, rather than just removing a button from his coat? Would FitzRoy have developed so clear a premonition about his own potential troubles without the terrible example of his beloved uncle’s suicide during his most impressionable years (FitzRoy was seventeen when Castlereagh died)? Would Darwin have secured his crucial opportunity if Canning’s bullet had been on the mark?
Tragically, FitzRoy’s premonition eventually came to pass in almost eerie consonance with his own nightmare and memory of Castlereagh. FitzRoy’s later career had its ups and downs. He suffered from several bouts of prolonged depression, accompanied by increasing suspicion and paranoia. In his last post, FitzRoy served as chief of the newly formed Meteorological Office and became a pioneer in weather forecasting. FitzRoy is much admired today for his cautious and excellent work in a most difficult field. But he encountered severe criticism during his own tenure, and for the obvious reason. Weathermen take enough flak today for incorrect predictions. Imagine the greater uncertainties more than a century ago. FitzRoy was stung by criticism of his imprecision. With a healthy mind, he would have parried the blows and come out fighting. But he sank into even deeper despair and eventually committed suicide by slitting his throat on April 30, 1865. Darwin mourned for his former friend (and more recent enemy of evolution), noting the fulfillment of the prophecy that had fostered his own career: “His end,” Darwin wrote, “was a melancholy one, namely suicide, exactly like that of his uncle Ld. Castlereagh, whom he resembled closely in manner and appearance.”
9. Finally, the other short and obvious statement: We must reject the self-serving historical myth that Darwin simply “saw” evolution in the raw when he broke free from the constraints of his culture and came face to face with nature all around the world. Darwin, in fact, did not become an evolutionist until he returned to England and struggled to make sense of what he had observed in the light of his own heritage: of Adam Smith, William Wordsworth, and Thomas Malthus, among others. Nonetheless, without the stimulus of the Beagle, I doubt that Darwin would have concerned himself with the origin of species or even entered the profession of science at all. Five years aboard the Beagle did serve as the sine qua non of Darwin’s revolution in thought.
My chain of argument runs in two directions from George Canning’s left buttock: on one branch, to Castlereagh’s survival, his magnanimous approach to the face-saving Treaty of Ghent, the consequent good feeling that made the Battle of New Orleans a heroic conquest rather than a bitter joke, to Andrew Jackson’s emergence as a military hero and national figure ripe for the presidency; on the other branch, to Castlereagh’s survival and eventual death by his own hand, to the example thus provided to his similarly afflicted nephew Robert FitzRoy, to FitzRoy’s consequent decision to take a social companion aboard the Beagle, to the choice of Darwin, to the greatest revolution in the history of biological thought. The duel on Putney Heath branches out in innumerable directions, but one leads to Jackson’s presidency and the other to Darwin’s discovery.
I don’t want to push this style of argument too far, and this essay is meant primarily as comedy (however feeble the attempt). Anyone can set out a list of contrary proposals. Jackson was a tough customer and might have made his way to the top without a boost from New Orleans. Perhaps FitzRoy didn’t need the drama of Castlereagh’s death to focus a legitimate fear for his own sanity. Perhaps Darwin was so brilliant, so purposeful, and so destined that he needed no larger boost from nature than a beetle collection in an English parsonage.
No connections are certain (for we cannot perform the experiment of replication), but history presents, as its primary fascination, this feature of large and portentous movements arising from tiny quirks and circumstances that appear insignificant at the time but cascade into later, and unpredictable, prominence. The chain of events makes sense after the fact, but would never occur in the same way again if we could rerun the tape of time.
I do not, of course, claim that history contains nothing predictable. Many broad directions have an air of inevitability. A theory of evolution would have been formulated and accepted, almost surely in the mid-nineteenth century, if Charles Darwin had never been born, if only for the simple reason that evolution is true, and not so veiled from our sight (and insight) that discovery could long have tarried behind the historical passage of cultural barriers to perception.
But we are creatures of endless and detailed curiosity. We are not sufficiently enlightened by abstractions devoid of flesh and bones, idiosyncrasies and curiosities. We cannot be satisfied by concluding that a thrust of Western history, and a dollop of geographic separation, virtually guaranteed the eventual independence of the United States. We want to know about the tribulations at Valley Forge, the shape of the rude bridge that arched the flood at Concord, the reasons for crossing out “property” and substituting “pursuit of happiness” in Jefferson’s great document. We care deeply about Darwin’s encounter with Galápagos tortoises and his studies of earthworms, orchids, and coral reefs, even if a dozen other naturalists would have carried the day for evolution had Canning killed Castlereagh, FitzRoy sailed alone, and Darwin become a country parson. The details do not merely embellish an abstract tale moving in an inexorable way. The details are the story itself; the underlying predictability, if discernible at all, is too nebulous, too far in the background, and too devoid of hooks upon actual events to count as an explanation in any satisfying sense.
Darwin, that great beneficiary of a thousand chains of improbable circumstance, came to understand this principle and to grasp thereby the essence of history in its largest domain of geology and life. When America’s great Christian naturalist Asa Gray told Darwin that he was prepared to accept the logic of natural selection but recoiled at the moral implications of a world without divine guidance, Darwin cited history as a resolution. Gray, in obvious distress, had posed the following argument: Science implies lawfulness; laws (like the principle of natural selection) are instituted by God to ensure his benevolent aims in the results of nature; the path of history, however full of apparent sorrow and death, must therefore include purpose. Darwin replied that laws surely exist and that, for all he knew, they might well embody a purpose legitimately labeled divine. But, Darwin continued, laws only regulate the broad outlines of history, “with the details, whether good or bad, left to the working out of what we may call chance.” (Note Darwin’s careful choice of words. He does not mean “random” in the sense of uncaused; he speaks of events so complex and contingent that they fall, by their unpredictability and unrepeatability, into the domain of “what we may call chance.”)
But where shall we place the boundary between lawlike events and contingent details? Darwin presses Gray further. If God be just, Darwin holds, you could not claim that the improbable death of a man by lightning or the birth of a child with serious mental handicaps represents the general and inevitable way of our world (even though both events have demonstrable physical causes). And if you accept “what we may call chance” (the presence of this man under that tree at that moment) as an explanation for a death, then why not for a birth? And if for the birth of an individual, why not for the origin of a species? And if for the origin of a species, then why not for the evolution of Homo sapiens as well?
You can see where Darwin’s chain of argument is leading: Human intelligence itself—the transcendent item that, above all else, supposedly reflected God’s benevolence, the rule of law, and the necessary progress of history—might be a detail, and not the predictable outcome of first principles. I wouldn’t push this argument to an absurd extreme. Consciousness in some form might lie in the realm of predictability, or at least reasonable probability. But we care about details. Consciousness in human form—by means of a brain plagued with inherent paths of illogic, and weighted down by odd and dysfunctional inheritances, in a body with two eyes, two legs, and a fleshy upper thigh—is a detail of history, an outcome of a million improbable events, never destined to repeat. We care about George Canning’s sore behind because we sense, in the cascade of consequences, an analogy to our own tenuous existence. We revel in the details of history because they are the source of our being.
2 | Grimm’s Greatest Tale
WITH THE POSSIBLE EXCEPTION of Eng and Chang, who had no choice, no famous brothers have ever been closer than Wilhelm and Jacob Grimm, who lived and worked together throughout their long and productive lives. Wilhelm (1786–1859) was the prime mover in collecting the Kinder- und Hausmärchen (fables for the home and for children) that have become a pillar and icon of our culture. (Can you even imagine a world without Rapunzel or Snow White?) Jacob, senior member of the partnership (1785–1863), maintained a primary interest in linguistics and the history of human speech. His Deutsche Grammatik, first published in 1819, became a cornerstone for documenting relationships among Indo-European languages. Late in their lives, after a principled resignation from the University of Göttingen (prompted by the king of Hanover’s repeal of the 1833 constitution as too liberal), the brothers Grimm settled in Berlin where they began their last and greatest project, the Deutsches Wörterbuch—a gigantic German dictionary documenting the history, etymology, and use of every word contained in three centuries of literature from Luther to Goethe. Certain scholarly projects are, like medieval cathedrals, too vast for completion in the lifetimes of their architects. Wilhelm never got past D; Jacob lived to see the letter F.
Speaking in Calcutta, during the infancy of the British raj in 1786, the philologist William Jones first noted impressive similarities between Sanskrit and the classical languages of Greece and Rome (an Indian king, or raja, matches rex, his Latin counterpart). Jones’s observation led to the recognition of a great Indo-European family of languages, now spread from the British Isles and Scandinavia to India, but clearly rooted in a single, ancient origin. Jones may have marked the basic similarity, but the brothers Grimm were among the first to codify regularities of change that underpin the diversification of the rootstock into its major subgroups (Romance languages, Germanic tongues, and so on). Grimm’s law, you see, does not state that all frogs shall turn into princes by the story’s end, but specifies the characteristic changes in consonants between Proto-Indo-European (as retained in Latin) and the Germanic languages. Thus, for example, Latin p’s become f’s in Germanic cognates (voiceless stops become voiceless fricatives in the jargon). The Latin plenum becomes “full” (voll, pronounced “foll” in German); piscis becomes “fish” (Fisch in German); and pes becomes “foot” (Fuss in German). (Since English is an amalgam of a Germanic stock with Latin-based imports from the Norman conquest, our language has added Latin cognates to Anglo-Saxon roots altered according to Grimm’s law—plenty, piscine, and podiatry. We can even get both for the price of one in plentiful.)
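For readers who like to see the regularity as a rule rather than a list of happy coincidences, here is a toy sketch in Python (my own illustration, not any linguist’s tool; the mapping table and function names are invented for the purpose) encoding just the word-initial p-to-f correspondence applied to the three Latin roots cited above.

    # A toy rendering of one Grimm's law correspondence: Proto-Indo-European
    # voiceless stops (conserved in Latin) become voiceless fricatives in
    # Germanic. Only the word-initial p -> f case from the essay is encoded;
    # real sound change is conditioned and far richer than this sketch.
    SHIFT = {"p": "f"}  # voiceless stop -> voiceless fricative

    def germanic_initial(latin_root: str) -> str:
        """Return the expected Germanic initial consonant of a Latin root."""
        return SHIFT.get(latin_root[0], latin_root[0])

    # Cognate pairs cited above: each English reflex begins with the shifted sound.
    for latin, english in [("plenum", "full"), ("piscis", "fish"), ("pes", "foot")]:
        shifted = germanic_initial(latin)
        assert english.startswith(shifted)
        print(f"Latin {latin}: initial '{latin[0]}' -> Germanic '{shifted}' (cf. '{english}')")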
I first learned about Grimm’s law in a college course more than twenty-five years ago. Somehow, the idea that the compilers of Rapunzel and Rumpelstiltskin also gave the world a great scholarly principle in linguistics struck me as one of the sweetest little facts I ever learned—a statement, symbolic at least, about interdisciplinary study and the proper contact of high and vernacular culture. I have wanted to disgorge this tidbit for years and am delighted that this essay finally provided an opportunity.
A great dream of unification underlay the observations of Jones and the codification of systematic changes by Jacob Grimm. Nearly all the languages of Europe (with such fascinating exceptions as Basque, Hungarian, and Finnish) could be joined to a pathway that spread through Persia all the way to India via Sanskrit and its derivatives. An origin in the middle, somewhere in the Near East, seemed indicated, and such “fossil” Indo-European tongues as Hittite support this interpretation. Whether the languages were spread, as convention dictates, by conquering nomadic tribes on horseback or, as Colin Renfrew argues in his recent book (Archaeology and Language, 1987), more gently and passively by the advantages of agriculture, evidence points to a single source with a complex history of proliferation in many directions.
Might we extend the vision of unity even further? Could we link Indo-European with the Semitic (Hebrew, Arabic) languages of the so-called Afro-Asiatic stock; the Altaic languages of Tibet, Mongolia, Korea, and Japan; the Dravidian tongues of southern India; even to the native Amerindian languages of the New World? Could the linkages extend even further to the languages of southeastern Asia (Chinese, Thai, Malay, Tagalog), the Pacific Islands, Australia, and New Guinea, even (dare one dream) to the most different tongues of southern Africa, including the Khoisan family with its complex clicks and implosions?
Most scholars balk at the very thought of direct evidence for connections among these basic “linguistic phyla.” The peoples were once united, of course, but the division and spread occurred so long ago (or so the usual argument goes) that no traces of linguistic similarity should be left according to standard views about rates of change in such volatile aspects of human culture. Yet a small group of scholars, including some prominent émigrés from the Soviet Union (where theories of linguistic unification are not so scorned), persists in arguing for such linkages, despite acrimonious rebuttal and dismissal from most Western colleagues. One heterodox view tries to link Indo-European with linguistic phyla of the Near East and northern Asia (from Semitic at the southwest, to Dravidian at the southeast, all the way to Japanese at the northeast) by reconstructing a hypothetical ancestral tongue called Nostratic (from the Latin noster, meaning “our”). An even more radical view holds that modern tongues still preserve enough traces of common ancestry to link Nostratic with the native languages of the Americas (all the way to South America via the Eskimo tongues, but excluding the puzzling Na-Dene languages of northwestern America).
The vision is beguiling, but I haven’t the slightest idea whether any of these unorthodox notions has a prayer of success. I have no technical knowledge of linguistics, only a hobbyist’s interest in language. But I can report, from my own evolutionary domain, that the usual biological argument, invoked a priori against the possibility of direct linkage among linguistic phyla, no longer applies. This conventional argument held that Homo sapiens arose and split (by geographical migration) into its racial lines far too long ago for any hope that ancestral linguistic similarities might be retained by modern speakers. (A stronger version held that various races of Homo sapiens arose separately and in parallel from different stocks of Homo erectus, thus putting the point of common linguistic ancestry even further back into a truly inaccessible past. Indeed, according to this view, the distant common ancestor of all modern people might not even have possessed language. Some linguistic phyla might have arisen as separate evolutionary inventions, scotching any hope for theories of unification.)
The latest biological evidence, mostly genetic but with some contribution from paleontology, strongly indicates a single and discrete African origin for Homo sapiens at a date much closer to the present than standard views would have dared to imagine—perhaps only 200,000 years ago or so, with all non-African diversity perhaps no more than 100,000 years old. Within this highly compressed framework of common ancestry, the notion that conservative linguistic elements might still link existing phyla no longer seems so absurd a priori. The idea is worth some serious testing, even if absolutely nothing positive eventually emerges.
This compression of the time scale also suggests possible success for a potentially powerful research program into the great question of historical linkages among modern peoples. Three major and entirely independent sources of evidence might be used to reconstruct the human family tree: (1) direct but limited evidence of fossil bones and artifacts by paleontology and archaeology; (2) indirect but copious data on degrees of genetic relationship among living peoples; (3) relative similarities and differences among languages, as discussed above. We might attempt to correlate these separate sources, searching for similarities in pattern. I am delighted to report some marked successes in this direction (“Reconstruction of Human Evolution: Bringing Together Genetic, Archaeological, and Linguistic Data,” by L. L. Cavalli-Sforza, A. Piazza, P. Menozzi, and J. Mountain, Proceedings of the National Academy of Sciences, 1988). The reconstruction of the human family tree—its branching order, its timing, and its geography—may be within our grasp. Since this tree is the basic datum of history, hardly anything in intellectual life could be more important.
Our recently developed ability to measure genetic distances for large numbers of protein or DNA sequences provides the keystone for resolving the human family tree. As I have argued many times, such genetic data take pride of place not because genes are “better” or “more fundamental” than data of morphology, geography, and language, but only because genetic data are so copious and so comparable. We all shared a common origin, and therefore a common genetics and morphology, as a single ancestral population some quarter of a million years ago. Since then, differences have accumulated as populations separated and diversified. As a rough guide, the more extensive the measured differences, the greater the time of separation. This correlation between extent of difference and time of separation becomes our chief tool for reconstructing the human family tree.
But this relationship is only rough and very imperfect. So many factors can distort and disrupt a strict correlation of time and difference. Similar features can evolve independently—black skin in Africans and Australians, for example, since these groups stand as far apart genealogically as any two peoples on earth. Rates of change need not be constant. Tiny populations, in particular, can undergo marked increases in rate, primarily by random forces of genetic drift. The best way to work past these difficulties lies in a “brute force” approach: The greater the quantity of measured differences, the greater the likelihood of a primary correlation between time and overall distance. Any single measure of distance may be affected by any of a large suite of forces that can disrupt the correlation of time and difference—natural selection, convergence, rapid genetic drift in small populations. But time is the only common factor underlying all measures of difference; when two populations split, all potential measures of distance become free to diverge. Thus, the more independent measures of distance we compile, the more likely we are to recover the only common signal of diversification: time itself. Only genetic data (at least for now) can supply this required richness in number of comparisons.
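The brute-force logic lends itself to a minimal simulation (a sketch under invented numbers, not a model of any real data set): give each pair of populations a true divergence time, let every individual distance measure equal that time plus its own independent distortion, and watch the correlation with true time sharpen as more measures are averaged.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs = 50                                # hypothetical population pairs
    true_time = rng.uniform(1, 100, n_pairs)    # true divergence times, arbitrary units

    def mean_distance(n_measures: int) -> np.ndarray:
        # Each measure = true time + its own independent distortion
        # (standing in for selection, convergence, drift, and the rest).
        noise = rng.normal(0, 40, size=(n_measures, n_pairs))
        return (true_time + noise).mean(axis=0)

    for n in (1, 10, 100, 1000):
        r = np.corrcoef(true_time, mean_distance(n))[0, 1]
        print(f"{n:4d} measures: correlation with true time = {r:.3f}")

With a single noisy measure the correlation is middling; with a thousand, the shared signal of time all but swamps the independent distortions, which is exactly why only data as copious as genetic comparisons will do.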