This Explains Everything
Short of imagining that the nerve cord glommed upward and took over the gut and a new gut spontaneously developed down below because it was “needed”—this idea was actually entertained for a while by one venturesome thinker—the best that biologists could do for a long time was to suppose that the arthropod plan and the chordate plan were alternative pathways of evolution from some primordial creature; just a matter of the roll of the dice, they thought.
Not only was this explanation boring, but molecular biology was also making it ever clearer that arthropods and chordates trace back to the same basic body plan in a good amount of detail. The shrimp’s little segments are generated by the same genes that create our vertebral column, and so on. Which leads back to the old question: How do you get from a lobster to a cat? Biologists are converging upon an answer that combines elegance with a touch of mystery, with a scintilla of humility in the bargain.
What’s increasingly thought to have happened is that some early wormlike aquatic creature with the arthropod-style body plan started swimming upside down. Creatures can do that—brine shrimp today, for example. Often it’s because a creature’s coloring is different on top than on the bottom, and having the top color down makes them harder for predators to see. So there would have been evolutionary advantage to such a creature turning upside down forever. In this creature, the spinal cord was up and the guts were down. By itself, this story is perhaps cute, maybe a little sad, but not much more. But suppose this little worm then evolved into today’s chordates? It’s hardly a stretch, given that the most primitive chordates actually are wormish, only vaguely piscine things called lancelets. And if you were moved to rip one open, you’d see that nerve cord on the back, not the front.
Molecular biology is quickly showing exactly how developing organisms can be signaled to develop either a shrimplike or a catlike body plan along these lines. There even seems to be a missing link—rather vile, smelly, bottom-feeding critters called acorn worms that have nerve cords on both the back and the front, and guts that seem on their way to moving on down.
So the reason we humans have a backbone is not because it’s somehow better to have a spinal column to break a fall backward, or anything of the sort. Roll the dice again and we could be bipeds with spinal columns running down our fronts like zippers and the guts carried in the back (this actually doesn’t sound half bad). This explanation of what’s called dorsoventral inversion is yet more evidence of how, under natural selection, such awesome variety can emerge in unbroken fashion from such humble beginnings. And finally it’s hard not to be heartened by a scientific explanation that early adopters, like Geoffroy Saint-Hilaire, were ridiculed for espousing.
Quite often when I’m preparing shrimp, or tearing open a lobster, or contemplating what it would be like to be forced to dissect an acorn worm, or patting my cat on the belly, or giving someone a hug, I think a bit about the fact that all these bodies are built on the same plan, except that the cat’s and the huggee’s bodies are the legacy of some worm swimming the wrong way up in a Precambrian ocean more than 550 million years ago. It has always struck me as rather gorgeous.
GERMS CAUSE DISEASE
GREGORY COCHRAN
Consultant, Adaptive Optics; adjunct professor of anthropology, University of Utah; coauthor (with Henry Harpending), The 10,000-Year Explosion: How Civilization Accelerated Human Evolution
The germ theory of disease has been very successful, particularly if you care about practical payoffs like staying alive. It explains how disease can rapidly spread to large numbers of people (exponential growth), why there are so many different diseases (distinct pathogen species), and why some kind of contact (sometimes indirect) is required for disease transmission. In modern language, most disease syndromes turn out to be caused by tiny self-replicating machines whose genetic interests are not closely aligned with ours.
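That parenthetical about exponential growth is worth pausing on. Here is a minimal sketch of the arithmetic (my illustration, not part of the essay; the reproduction number of 3 and the ten generations are purely hypothetical):

```python
# Minimal sketch of the arithmetic of contagion: if each case infects
# R0 new people per disease generation, cases multiply geometrically.
# R0 = 3 and ten generations are illustrative numbers, not real data.

R0 = 3       # hypothetical new infections caused by each case
cases = 1    # a single index case
for generation in range(1, 11):
    cases *= R0
    print(f"generation {generation}: {cases:,} cases")

# Ten generations turn one case into 3**10 = 59,049 -- the runaway
# growth that germ theory explains and that vaccination interrupts.
```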
In fact, germ theory has been so successful that it almost seems uninteresting. Once we understood the causes of cholera and pneumonia and syphilis, we got rid of them, at least in the wealthier countries. Now we’re at the point where people resist the means of victory—vaccination, for example—because they no longer remember the threat.
It’s still worth studying—not just to fight the next plague but also because it has been a major factor in human history and human evolution. You can’t really understand Cortez without smallpox or Keats without tuberculosis. The past is another country—don’t drink the water.
It may well explain patterns we aren’t even supposed to see, let alone understand. For example, human intelligence was, until recently, ineffective at addressing problems caused by microparasites, as William McNeill has pointed out in Plagues and Peoples. Those invisible enemies played a major role in determining human biological fitness—more so in some places than others. Consider the implications.
Lastly, when you leaf through an illustrated book on tropical diseases and gaze upon an advanced case of elephantiasis or crusted scabies, you realize that any theory that explains that much ugliness just has to be true.
DIRT IS MATTER OUT OF PLACE
CHRISTINE FINN
Archaeologist, journalist; author, Artifacts: An Archaeologist’s Year in Silicon Valley
I admire this explanation of cultural relativity, by the anthropologist Mary Douglas, for its clean lines and tidiness. I like its beautiful simplicity, the way it illuminates dark corners of misreading, how it highlights the counterconventional. Poking about in the dirt is exciting, and irreverent. It’s about taking what’s out of bounds and making it relevant. Douglas’s explanation of “dirt” makes us question the very boundaries we’re pushing.
INFORMATION IS THE RESOLUTION OF UNCERTAINTY
ANDREW LIH
Associate professor of journalism, University of Southern California; author, The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia
Nearly everything we enjoy in the digital age hinges on this one idea, yet few people are aware of its originator or the foundations of this simple, elegant theory of information. How many know that the information age was not the creation of Bill Gates or Steve Jobs but of Claude Shannon in 1948? Shannon was a humble man and an intellectual wanderer who shunned public speaking and granting interviews. This brilliant mathematician, geneticist, and cryptanalyst formulated what would become information theory in the aftermath of World War II, when it was apparent that the war had not been just a war of steel and bullets.
If World War I was the first mechanized war, the Second World War could be considered the first struggle based around communication technologies. Unlike previous conflicts, there was heavy utilization of radio communication among military forces. This rapid remote coordination pushed the war to all corners of the globe. The field of cryptography advanced quickly, in order to keep messages secret and hidden from adversaries. Also, for the first time in combat, radar was used to detect and track aircraft, thereby surpassing conventional visual capabilities that ended on the horizon.
Claude Shannon was working on the problem of antiaircraft targeting and designing fire-control systems to work directly with radar. How could you determine the current and future position of enemy aircraft so that you could properly time artillery fire to shoot them down? The radar information about plane position was a breakthrough, but “noisy,” in that it provided an approximation of location, but not one precise enough to be immediately useful. After the war, this inspired Shannon and many others to think about the nature of filtering and propagating information, whether radar signals, voices (on phone calls), or video (for television). Noise was the enemy of communication, so any way to store and transmit information that rejected noise was of particular interest to Shannon’s employer, Bell Laboratories, the research arm of the mid-century American telephone monopoly.
Shannon considered communication the most mathematical of the engineering sciences and turned his intellect toward this problem. Having worked on the intricacies of Vannevar Bush’s differential-analyzer analog computer in his early days at MIT, and with a mathematics-heavy PhD thesis (“An Algebra for Theoretical Genetics”), Shannon was particularly well suited to understanding the fundamentals of handling information using knowledge from a variety of disciplines. By 1948 he had formed his central, simple, and powerful thesis: Information is the resolution of uncertainty.
As long as something can be relayed that resolves uncertainty, that is the fundamental nature of information. While this sounds obvious, it was an important point, given the different languages people speak and how one utterance can be meaningful to one person and unintelligible to another. Until Shannon’s theory was formulated, it was not known how to compensate for these types of “psychological factors” appropriately. Shannon built on the work of fellow researchers Ralph Hartley and Harry Nyquist to show that coding and symbols were the key to resolving whether two communicators had a common understanding of the uncertainty being resolved.
Shannon then asked, “What is the simplest resolution of uncertainty?” To him it was the flip of a coin—heads or tails, yes or no: an event with only two outcomes. He concluded that any type of information could be encoded as a series of yes-or-no answers. Today we know these answers as bits of digital information, 1s and 0s, which represent everything from e-mail text and digital photos to compact-disc music and high-definition video. That any and all information could be represented and coded in discrete bits not just approximately but perfectly, without noise or error, was a breakthrough that astonished even his peers at academic institutions and Bell Laboratories, who had despaired of inventing a simple universal theory of information.
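To make the coin-flip measure concrete, here is a minimal sketch of Shannon’s entropy, the average number of yes-or-no answers (bits) needed to resolve a source’s uncertainty (my illustration; the example probabilities are invented):

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy: the average number of yes-or-no answers (bits)
    needed to resolve the uncertainty of one symbol from a source."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# The flip of a fair coin is the simplest resolution of uncertainty: 1 bit.
print(entropy_bits([0.5, 0.5]))    # 1.0
# A heavily biased coin is more predictable, so each flip informs us less.
print(entropy_bits([0.9, 0.1]))    # ~0.47
# Eight equally likely outcomes take three yes-or-no questions: 3 bits.
print(entropy_bits([1/8] * 8))     # 3.0
```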
The compact disc, the first ubiquitous digital encoding system for the average consumer, brought Shannon’s legacy to the masses in 1982. It reproduces sound with remarkable fidelity by dividing each second of musical audio into 44,100 slices (samples) and recording the height of each slice as a digital number (quantization). Higher sampling rates and finer quantization raise the quality of the sound. Converting this digital stream back to audible analog sound using modern circuitry allowed for consistently high fidelity. Similar digital approaches have been used for images and video, so that today we enjoy a universe of MP3, DVD, HDTV, and AVCHD multimedia files that can be stored, transmitted, and copied with no loss of quality.
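The two steps described above, sampling and quantization, can be sketched in a few lines. This is a toy illustration rather than a real audio pipeline; the 44,100 Hz rate and 16-bit depth are the actual CD parameters, while the pure 440 Hz tone merely stands in for music:

```python
import math

SAMPLE_RATE = 44_100      # CD samples per second
BIT_DEPTH = 16            # CD bits per sample
LEVELS = 2 ** BIT_DEPTH   # 65,536 quantization levels

def record_tone(freq_hz, seconds):
    """Sample a pure tone and quantize each sample to one of 65,536
    levels -- the same two steps a CD applies to an analog signal."""
    n_samples = int(SAMPLE_RATE * seconds)
    samples = []
    for n in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)  # value in [-1, 1]
        samples.append(round((analog + 1) / 2 * (LEVELS - 1)))      # nearest level
    return samples

# One second of concert-pitch A becomes 44,100 integers: a stream that
# can be stored, copied, and transmitted with no further loss.
tone = record_tone(440, 1.0)
print(len(tone))   # 44100
```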
Shannon became a professor at MIT, and over the years his students made many of the major breakthroughs of the information age, including digital modems, computer graphics, data compression, artificial intelligence, and digital wireless communication. Information theory as a novel and previously unimagined discovery has transformed nearly every aspect of our daily lives to digital, from how we work to how we live and socialize. Beautiful, elegant, and deeply powerful!
EVERYTHING IS THE WAY IT IS BECAUSE IT GOT THAT WAY
PZ MYERS
Associate professor of biology, University of Minnesota Morris; author, Atheist Voices of Minnesota: An Anthology of Personal Stories
There’s no denying that the central concept of modern biology is evolution, but I was a victim of the American public school system and I went through twelve years of education without once hearing any mention of the E word. We dissected cats, we memorized globs of taxonomy, we regurgitated extremely elementary fragments of biochemistry on exams, but we were not given any framework to make sense of it all. One reason I care very much about science education now is that mine was so poor.
The situation wasn’t much better in college. There, evolution was universally assumed, but there was no remedial introduction to the topic—it was sink or swim. Determined not to drown, I sought out context—anything that would help me understand all these facts my instructors expected me to know. I found it in a used bookstore, in a book that I selected because it wasn’t too thick and daunting and because when I skimmed it I could tell that it was clearly written, unlike the massive, dense, and opaque reference books my classes foisted on me. It was John Tyler Bonner’s On Development: The Biology of Form, and it blew my mind—and also warped me permanently so that I see biology through the lens of development.
The first thing the book taught me wasn’t an explanation, which was something of a relief; my classes were just full of explanations already. Bonner’s book is about questions—good questions, some of which had answers and others that just hung there ripely. For instance, how is biological form defined by genetics? It’s the implicit question in the title, but the book refined the questions we need to answer in order to explain the problem. Maybe that’s explanation at a different level: Science isn’t a body of archived facts; it’s the path we follow to acquire new knowledge.
Bonner also led me to D’Arcy Wentworth Thompson and his classic book, On Growth and Form, which provided my favorite aphorism for a scientific view of the universe, “Everything is the way it is because it got that way.” It’s a subtle way of emphasizing the importance of process and history in understanding why everything is the way it is. You simply cannot grasp the concepts of science if your approach is to dissect the details in a static snapshot of its current state; your only hope is to understand the underlying mechanisms that generate that state, and how it came to be. The necessity of that understanding is implicit in developmental biology, where all we do is study the process of change in the developing embryo, but I found it essential as well in genetics, comparative physiology, anatomy, and biochemistry. And of course it is paramount in evolutionary biology.
So my most fundamental explanation is a mode of thinking: To understand how something works, you must first understand how it got that way.
THE IDEA OF EMERGENCE
DAVID CHRISTIAN
Professor of history at Macquarie University, Sydney; author, Maps of Time
One of the most beautiful and profound ideas I know, and one whose power is not widely enough appreciated, is the idea of emergence and emergent properties.
When created, our universe was pretty simple. For several hundred million years, there were no stars, hardly any atoms more complex than helium, and of course no planets, no living organisms, no people, no poetry.
Then, over 13.7 billion years, all these things appeared, one by one. Each had qualities that had never been seen before. This is creativity in its most basic and mysterious form. Galaxies and stars were the first large, complex objects, and they had strange new properties. Stars fused hydrogen atoms into helium atoms, creating vast amounts of energy and forming hot spots dotted throughout the universe. In their death throes, the largest stars created all the elements of the periodic table, while the energy they pumped into the cold space around them helped assemble these elements into utterly new forms of matter with entirely new properties. Now it was possible to form planets, bacteria, dinosaurs, and us.
Where did all these exotic new things come from? How do new things, new qualities “emerge”? Were they present in the components from which they were made? The simplest reductive arguments presume they had to be. But if so, they can be devilishly hard to find. Can you find “wateriness” in the atoms of hydrogen and oxygen that form water molecules? This is why “emergence” so often seems magical and mysterious.
But it’s not, really. One of the most beautiful explanations of emergence is found in a Buddhist sutra probably composed more than 2,000 years ago, “The Questions of Milinda.” (I’m paraphrasing on the basis of an online translation.)
Milinda is a great emperor. He was an actual historical figure, the Greco-Bactrian emperor Menander, who ruled a Central Asian kingdom founded by generals from Alexander the Great’s army. In the sutra, Milinda meets with Nagasena, a great Buddhist sage—probably on the plains of modern Afghanistan. Milinda had summoned Nagasena because he was getting interested in Buddhism but was puzzled because the Buddha seemed to deny the reality of the self. For most of us, the sense of self is the very bedrock of reality. (When Descartes said “I think, therefore I am,” he doubtless meant something like “The self is the only thing we know that exists for certain.”)
So we should imagine Milinda sitting in a royal chariot, followed by a huge retinue of courtiers and soldiers, meeting Nagasena, with his retinue of Buddhist monks, for a great debate about the nature of the self, reality, and creativity. It’s a splendid vision.
Milinda asks Nagasena to explain the Buddha’s idea of the “self.” Nagasena asks, “Sire, how did you come here?” Milinda says, “In a chariot, of course, reverend Sire.”
“Sire, if you removed the wheels would it still be a chariot?”
“Yes, of course it would,” says Milinda, with some irritation, wondering where this conversation is going.
“And if you removed the framework, or the flagstaff, or the yoke, or the reins, or the goadstick, would it still be a chariot?”
Eventually Milinda starts to get it. He admits that at some point his chariot would no longer be a chariot, because it would have lost the quality of chariotness and could no longer do what chariots do.
And now Nagasena cannot resist gloating, because Milinda has failed to define in what exact sense his chariot really exists. Then comes the punch line: “Your Majesty has spoken well about the chariot. It is just so with me. . . . This denomination, ‘Nagasena,’ is a mere name. In ultimate reality, this person cannot be apprehended.”
Or, in modern language, I and all the complex things around me exist only because many things were precisely assembled. The “emergent” properties are not magical. They are really there, and eventually they may start rearranging the environments that generated them. But they don’t exist “in” the bits and pieces that made them; they emerge from the arrangement of those bits and pieces in very precise ways. And that is also true of the emergent entities known as “you” and “me.”