Natural selection is the only known counterweight to the tendency of physical systems to lose rather than grow functional organization—the only natural physical process that pushes populations of organisms uphill (sometimes) into higher degrees of functional order. But how could this work, exactly?
It is here that, along with entropy and natural selection, the third of our trio of truly elegant scientific ideas can be adapted to the problem: Galileo’s brilliant concept of frames of reference, which he used to clarify the physics of motion.
The concept of entropy was originally developed for the study of heat and energy, and if the only kind of real entropy were the thermodynamic entropy of energy dispersal, then we (life) wouldn't be possible. But with Galileo's contribution, one can consider multiple kinds of order (improbable physical arrangements), each defined with respect to a distinct frame of reference.
There can be as many kinds of entropy as there are meaningful frames of reference. Organisms are defined as self-replicating physical systems. This creates a frame of reference that defines its kind of order in terms of causal interrelationships that promote the replication of the system (replicative rather than thermodynamic order). Indeed, organisms must be physically designed to capture undispersed energy, and like hydroelectric dams using waterfalls to drive turbines, they use this thermodynamic entropic flow to fuel their replication, spreading multiple copies of themselves across the landscape.
Entropy sometimes introduces copying errors into replication, but injected disorder in replicative systems is self-correcting. By definition, the less well-organized are worse at replicating themselves and so are removed from the population. In contrast, copying errors that increase functional order (replicative ability) become more common. This inevitable ratchet effect in replicators is natural selection.
Organisms exploit the trick of deploying different entropic frames of reference in many diverse and subtle ways, but the underlying point is that what is naturally increasing disorder (moving toward maximally probable states) for one frame of reference inside one physical domain can be harnessed to decrease disorder with respect to another frame of reference. Natural selection picks out and links different entropic domains (e.g., cells, organs, membranes) that each impose their own proprietary entropic frames of reference locally. When the right ones are associated with each other, they do replicative work by harnessing various types of increasing entropy to decrease other kinds of entropy in ways useful to the organism. For example, oxygen diffusion from the lungs to the bloodstream to the cells is driven by the entropy of chemical mixing—the oxygen falls toward more probable, higher-entropy states, yet the flow increases order from the perspective of replication promotion.
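To put a number behind the mixing example (a standard textbook formula, not Tooby's own notation): the ideal entropy of mixing is always positive, which is why oxygen spontaneously diffuses down its concentration gradient and why that flow is available to be harnessed,
\[
\Delta S_{\text{mix}} = -nR\sum_i x_i \ln x_i \;>\; 0 \qquad (0 < x_i < 1),
\]
where n is the number of moles, R the gas constant, and the x_i are the mole fractions of the mixing species. Every term is positive, so the diffusing gas climbs in entropy under the thermodynamic frame of reference even as the organism uses the flow to maintain its improbable, replication-promoting arrangement.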
Entropy makes things fall, but life ingeniously rigs the game so that when they do, they often fall into place.
WHY THINGS HAPPEN
PETER ATKINS
Emeritus professor of chemistry, University of Oxford; author, Reactions: The Private Life of Atoms
There is a wonderful simplicity in the view that events occur because things get worse. I have in mind the second law of thermodynamics and the fact that all natural change is accompanied by an increase in entropy. Although that is in my mind, I understand those words in terms of the tendency of matter and energy to disperse in disorder. Molecules of a gas undergo ceaseless virtually random motion and spread into the available volume. The chaotic thermal motion of atoms in a block of hot metal jostles their neighbors into motion, and as the energy spreads into the surroundings, so the block cools. All natural change is at root a manifestation of this simple process—that of dispersal in disorder.
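Both examples have a one-line quantitative counterpart in textbook thermodynamics (a sketch in standard notation, not Atkins' own words): the gas spreading into a larger volume and the hot block leaking heat to cooler surroundings each raise the total entropy,
\[
\Delta S_{\text{gas}} = nR\ln\frac{V_{\text{final}}}{V_{\text{initial}}} > 0
\qquad\text{and}\qquad
\Delta S_{\text{cooling}} = \frac{q}{T_{\text{cold}}} - \frac{q}{T_{\text{hot}}} > 0 ,
\]
where n is the amount of gas, R the gas constant, and q the heat passed from the block at temperature \(T_{\text{hot}}\) to surroundings at \(T_{\text{cold}} < T_{\text{hot}}\).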
The astonishing feature of this perception of natural change is that the dispersal can generate order: Through dispersal in disorder, structure can emerge. All it needs is a device that can link in to the dispersal, and just as a plunging stream of water can be harnessed and used to drive construction, so the stream of dispersal can be harnessed. Overall, there is an increase in disorder as the world progresses, but locally structures, including cathedrals and brains, dinosaurs and dogs, piety and evil deeds, poetry and diatribes, can be thrown up as local abatements of chaos.
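The bookkeeping that allows this (standard second-law notation, not the essay's own) constrains only the total:
\[
\Delta S_{\text{total}} = \Delta S_{\text{local}} + \Delta S_{\text{surroundings}} \;\ge\; 0 ,
\]
so a cathedral, a brain, or a protein can form with \(\Delta S_{\text{local}} < 0\), provided the dispersal it is coupled to generates at least as much entropy elsewhere.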
Take, for instance, an internal combustion engine. The spark results in the combustion of the hydrocarbon fuel, with the generation of smaller water and carbon dioxide molecules that tend to disperse and in so doing drive down a piston. At the same time, the energy released in the combustion spreads into the surroundings. The mechanical design of the engine harnesses these dispersals, and, through a string of gears, that harnessing can be used to build from bricks a cathedral. Thus dispersal results in a local structure, even though, overall, the world has sunk a little more into disorder.
The fuel might be our dinner, which, as it is metabolized, releases molecules and energy, which spread. The analog of the gears in a vehicle is the network of biochemical reactions within us, and instead of a pile of bricks molded into a cathedral, amino acids are joined together to generate the intricate structure of a protein. Thus, as we eat, so we grow. We, too, are local abatements of chaos driven into being by the generation of disorder elsewhere.
Is it then too fanciful to imagine intellectual creativity, or just plain inconsequential reverie, as being driven likewise? At some kind of notional rest, the brain is a hive of electric and synaptic activity. The metabolic processes driven by the digestion of food can result in the ordering not of brick into cathedral, not of amino acid into protein, but of current into concept, artistic work, foolhardy decision, scientific understanding.
Even that other great principle, natural selection, can be regarded as an extraordinarily complex reticulated unwinding of the world, with the changes that take place in the biosphere and its evolution driven ultimately by descent into disorder. Is it then any wonder that I regard the second law as a great enlightenment? That from so simple a principle great consequences flow is, in my view, a criterion of the greatness of a scientific principle. No principle, I think, can be simpler than that things get worse, and no consequences greater than the universe of all activity, so surely this law is the greatest of all.
WHY WE FEEL PRESSED FOR TIME
ELIZABETH DUNN
Social psychologist, University of British Columbia
Recently I found myself on the side of the road picking gravel out of my knee and wondering how I’d ended up there. I had been biking from work to meet a friend at the gym, pedaling frantically to make up for being a few minutes behind schedule. I knew I was going too fast, and when I hit a patch of loose gravel while careening through a turn, my bike slid out from under me. How had I gotten myself in this position? Why was I in such a rush?
I thought I knew the answer. The pace of life is increasing; people are working more and relaxing less than they did fifty years ago—at least that’s the impression one gets from the popular media. But as a social psychologist, I wanted to see the data. As it turns out, there is very little evidence that people nowadays are working more and relaxing less than they did in earlier decades. In fact, some of the best studies suggest just the opposite. So why do people report feeling so pressed for time?
A beautiful explanation for this puzzling phenomenon was recently offered by Sanford DeVoe of the University of Toronto and Jeffrey Pfeffer of Stanford. They argue that as time becomes worth more and more money, time is seen as scarcer. Scarcity and value are perceived as conjoined twins; when a resource—from diamonds to drinking water—is scarce, it is more valuable, and vice versa. So when our time becomes more valuable, we feel as though we had less of it. Surveys around the world have shown that people with higher incomes report feeling more pressed for time—though there are other plausible reasons for this, including the fact that the affluent often work longer hours, leaving them with less free time.
DeVoe and Pfeffer proposed, however, that simply perceiving oneself as affluent might be sufficient to generate feelings of time pressure. Going beyond past correlational analyses, they used controlled experiments to put this causal explanation to the test.* In one experiment, DeVoe and Pfeffer asked 128 undergraduates to report the total amount of money they had in the bank. All the students answered the question using an 11-point scale, but for half the students, the scale was divided into $50 increments, ranging from $0–$50 to over $500, whereas for the others the scale was divided into much larger increments, ranging from $0–$500 to over $400,000. Most undergraduates using the $50-increment scale circled a number near the top, leaving them with the sense that they were relatively well-off. And this seemingly trivial manipulation led them to feel that they were rushed, pressed for time, and stressed out. Just feeling affluent led students to experience the same sense of time pressure reported by genuinely affluent individuals. Using other methods, researchers have confirmed that increasing the perceived economic value of time increases its perceived scarcity.
If feelings of time-scarcity stem in part from the sense that time is highly valuable, then one of the best things we can do to reduce this sense of pressure may be to give our time away. Indeed, new research suggests that giving time away to help others can actually alleviate feelings of time pressure. Companies like Home Depot provide their employees with opportunities to volunteer their time to help others, potentially reducing feelings of time stress and burnout. And Google encourages employees to use 20 percent of their time on their own pet projects, whether or not these have payoff potential. Although some of them resulted in economically valuable products, like Gmail, the greatest value of this program might lie in reducing employees’ sense that their time is scarce.
DeVoe and Pfeffer’s work can help account for important cultural trends. Over the past fifty years, feelings of time pressure have risen dramatically in North America, despite the fact that weekly hours of work have stayed fairly level and weekly hours of leisure have climbed. This apparent paradox may be explained in no small part by the fact that incomes have increased substantially during the same period. This causal effect may also help to explain why people walk faster in wealthy cities like Tokyo and Toronto than they do in cities like Nairobi and Jakarta. And at the level of the individual, this explanation suggests that as incomes grow over the course of one’s life, time seems increasingly scarce. Which means that as my career develops, I might have to force myself to take those turns a little slower.
WHY THE SUN STILL SHINES
BART KOSKO
Professor of electrical engineering and law, University of Southern California; author, Noise
One of the deepest explanations has to be why the sun still shines—and thus why the sun has not long since burned out, as do the fires of everyday life. That had to worry some of the sun gazers of old as they watched campfires and forest fires burn through their life cycles. It worried the 19th-century scientists who knew that gravity alone could not account for the likely long life of the sun.
It sure worried me when I first thought about it as a child.
The explanation of hydrogen atoms fusing into helium was little comfort. It came at the height of the duck-and-cover cold-war paranoia in the early 1960s after my father had built part of the basement of our new house into a nuclear bomb shelter. The one-room shelter came complete with reinforced concrete and metal windows and a deep freezer packed with homemade TV dinners. The sun burned so long and so brightly because there were in effect so many mushroom-cloud-producing thermonuclear hydrogen-bomb explosions going off inside it and because there was so much hydrogen-bomb-making material in the sun. The explosions were just like the hydrogen bomb explosions that could scorch the Earth and even incinerate the little bomb shelter if they went off close enough.
The logic of the explanation went well beyond explaining the strategic equilibrium of a nuclear Mexican standoff on a global scale. The good news that the sun would not burn out anytime soon came with the bad news that the sun would certainly burn out in a few billion years. But first it would engulf the molten Earth in its red-giant phase.
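A back-of-the-envelope estimate, using standard textbook values rather than anything stated in the essay, shows why fusion buys the sun so much time: hydrogen-to-helium fusion converts roughly 0.7 percent of the fused mass into energy, and only about a tenth of the sun's mass (the core) ever gets hot enough to fuse. Dividing that energy reserve by the solar luminosity gives
\[
t \;\approx\; \frac{0.007 \times 0.1 \times (2\times 10^{30}\,\text{kg}) \times (3\times 10^{8}\,\text{m/s})^{2}}{4\times 10^{26}\,\text{W}}
\;\approx\; 3\times 10^{17}\,\text{s} \;\approx\; 10^{10}\,\text{years},
\]
on the order of ten billion years of hydrogen burning: long enough that the sun still shines, short enough that it must eventually go out.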
The same explanation said further that in cosmic due course all the stars would burn out or blow up. There is no free lunch in the heat and light that result when simpler atoms fuse into slightly more complex atoms and when mass transforms into energy. There would not even be stars for long. The universe would go dark and get ever closer to absolute-zero cold. The result would be a faint white noise of sparse energy and matter. Even the black holes would, over eons, burn out or leak out into the near nothingness of an almost perfect faint white noise. That steady-state white noise will have effectively zero information content. It will be the last few steps in a staggeringly long sequence of irreversible nonlinear steps or processes that make up the evolution of the universe. So there will be no way to figure out the lives and worlds that preceded it, even if something arose that could figure.
The explanation of why the sun still shines is as deep as it gets. It explains doomsday.
BOSCOVICH’S EXPLANATION OF ATOMIC FORCES
CHARLES SIMONYI
Creator, WYSIWYG word processor; cofounder, Intentional Software; former director of application development and chief software architect, Microsoft Corporation
An example of how amazing insight can spring from simple considerations is the explanation of atomic forces by the 18th-century Jesuit polymath Roger Boscovich.
One of the great philosophical arguments at the time took place between the adherents of Descartes who—following Aristotle—thought that forces can only be the result of immediate contact, and those who followed Newton and believed in his concept of force acting at a distance. Newton was the revolutionary here, but his opponents argued—with some justification—that “action at a distance” brought back into physics “occult” explanations that do not follow from the clear understanding that Descartes demanded. Boscovich, a forceful advocate of the Newtonian point of view, turned the question around: Let’s understand exactly what happens during the interaction that we would call “immediate contact.”
His arguments are easy to understand and extremely convincing. Let's imagine two bodies of equal mass, one traveling at a speed of, say, 6 units, the other at a speed of 12, with the faster body catching up with the slower along the same straight path. When the two bodies collide, conservation of the quantity of motion dictates that both should continue after the collision along the same path, each with a speed of 9 units (in the case of an inelastic collision or, in the case of an elastic collision, at least for a brief moment).
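In modern notation, the figure of 9 is simply conservation of momentum for two equal masses moving together (a sketch of the arithmetic, not Boscovich's own formalism):
\[
m\cdot 12 + m\cdot 6 = (m+m)\,v \quad\Longrightarrow\quad v = \frac{12+6}{2} = 9 .
\]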
But how did the velocity of the faster body come to be reduced from 12 to 9 and that of the slower body increased from 6 to 9? Clearly, the time interval for the change in velocities cannot be zero, for then, argued Boscovich, the instantaneous change in speed would violate the law of continuity. Furthermore, we would have to say that at the moment of impact, the speed of one body is simultaneously 12 and 9, which is patently absurd.
It is therefore necessary for the change in speed to take place in a small yet finite amount of time. But with this assumption, we arrive at another contradiction. Suppose, for example, that after a small interval of time the speed of the faster body is 11 and that of the slower body is 7. But this would mean that they aren’t moving at the same velocity, and the front surface of the faster body would advance through the rear surface of the slower body, which is impossible because we have assumed that the bodies are impenetrable. It therefore becomes apparent that the interaction must take place immediately before the impact of the two bodies and that this interaction can only be a repulsive one, because it is expressed in the slowing down of one body and the speeding up of the other.
Moreover, this argument is valid for arbitrary speeds, so one can no longer speak of definite dimensions for the particles (namely, the atoms) that were until now thought of as impenetrable. An atom should be viewed, rather, as a point source of force, with the force emanating from it acting in some complicated fashion that depends on distance.
According to Boscovich, when bodies are far apart, they act on each other through a force corresponding to the gravitational force, which is inversely proportional to the square of the distance. But with decreasing distance, this law must be modified, because, in accordance with the above considerations, the force changes sign and must become a repulsive force. Boscovich even plotted fanciful traces of how the force should vary with distance, in which the force changed sign several times, hinting at the existence of minima in the potential and the existence of stable bonds between the particles, or atoms.
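A familiar modern curve with the same qualitative character (though it changes sign only once, rather than several times as in Boscovich's fanciful traces) is the Lennard-Jones interatomic potential:
\[
V(r) = 4\varepsilon\!\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
F(r) = -\frac{dV}{dr},
\]
with the force \(F(r)\) attractive at large separations, repulsive at small ones, and zero at \(r = 2^{1/6}\sigma\), the minimum of the potential where two atoms can sit in a stable bond.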
With this idea, Boscovich not only offered a new picture for interactions in place of the Aristotelian-Cartesian theory based on immediate contact, but also presaged our understanding of the structure of matter, especially that of solid bodies.
BIRDS ARE THE DIRECT DESCENDANTS OF DINOSAURS
GREGORY S. PAUL
Independent researcher; author, The Princeton Field Guide to Dinosaurs
The most graceful example of an elegant scientific idea in one of my fields of expertise is the idea that dinosaurs were tachyenergetic—that they were endotherms with the high internal-energy production and high aerobic-exercise capacity typical of birds and mammals that can sustain long periods of intense activity. Although not dependent on it, the high-powered-dinosaur idea meshes with the hypothesis that birds are the direct descendants of dinosaurs—that birds are flying dinosaurs, much as bats are flying mammals.
The sense that the tachyenergetic idea makes cannot be overemphasized, nor can the degree to which it revolutionized a big chunk of our understanding of evolution and 230 million years of Earth history, relative to what was thought from the mid-1800s to the 1960s. Until then, it was generally presumed that dinosaurs were a dead-end collection of bradyenergetic reptiles that could achieve high levels of activity for only brief bursts; even walking at a rate of 5 mph requires a respiratory capacity beyond that of reptiles, which must plod along at a mile per hour or so if they are moving over a long distance. Birds were seen as a distinct and feathery group in which energy inefficiency evolved in order to power flight. Although the latter hypothesis was not inherently illogical, it was divergent from the evolution of bats, whose high aerobic capacity was already present in their furry ancestors.