This Explains Everything


edited by John Brockman


  I first learned of “warm-blooded” dinosaurs in my senior year of high school, via a blurb in Smithsonian Magazine about Robert Bakker’s article in Nature in the summer of 1972. As soon as I read it, it just clicked. I had been illustrating dinosaurs in accord with the reptilian consensus, but it was a bad fit, because dinosaurs are so obviously constructed like birds and mammals, not crocs and lizards. About the same time, John Ostrom, who also had a hand in discovering dinosaur endothermy, was presenting the evidence that birds are aerial versions of avepod dinosaurs—a concept so obvious that it should have become the dominant thesis back in the 1800s.

  For a quarter century, the hypotheses were highly controversial—the one regarding dinosaur metabolics especially so—and some of the first justifications were flawed. But the evidence has piled up. Growth rings in dinosaur bones show that they grew at a fast pace not achievable by reptiles. Their tracks show that they walked at steady speeds too high for bradyaerobes. Many small dinosaurs were feathery. And polar dinosaurs, birds, and mammals were living through blizzardy Mesozoic winters that excluded ectotherms.

  Because of the dinorevolution, our understanding of the evolution of the animals that dominated the continents is far closer to the truth than it was. Energy-efficient amphibians and reptiles dominated the continents for only 70 million years in the later portion of the Paleozoic, the era that had begun with trilobites and nothing on land. For the last 270 million years, higher-powered albeit less energy-efficient tachyenergy has reigned supreme on land, starting with protomammalian therapsids near the end of the Paleozoic. When therapsids went belly-up early in the Mesozoic (the survivors of the group being the then-all-small mammals), they were replaced for the next 150 million years not by lower-powered dinosaurs but by dinosaurs that quickly took aerobic-exercise capacity to even greater levels.

  The unusual avian respiratory complex is so effective that some birds fly as high as airliners, but the system did not evolve for flight. That’s because the skeletal apparatus for operating air-sac-ventilated lungs first developed in flightless avepod dinosaurs for terrestrial purposes (some researchers, but by no means all, offer low global oxygen levels as the selective factor). So the basics of avian energetics appeared in predacious dinosaurs and only later were used to achieve powered flight. Rather like how internal combustion engines happened to make powered human flight practical, rather than having been developed to do so.

  COMPLEXITY OUT OF SIMPLICITY

  BRUCE HOOD

  Director of the Bristol Cognitive Development Centre, University of Bristol, UK; author, The Self Illusion: How the Social Brain Creates Identity

  As a scientist dealing with complex behavioral and cognitive processes, my deep and elegant explanation comes not from psychology (which is rarely elegant) but from the mathematics of physics. For my money, Fourier’s theorem has all the simplicity of, and yet more power than, other familiar explanations in science. Stated simply, any complex pattern, whether in time or space, can be described as a series of overlapping sine waves of multiple frequencies and various amplitudes.
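  The theorem is easy to see in action. Here is a minimal sketch (not from the essay itself; it assumes NumPy, and the choice of a square wave and of five harmonics is arbitrary) showing a jagged pattern being rebuilt from a handful of sine waves:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))  # a "complex" pattern: a square wave

# Fourier series of a square wave: odd harmonics k with amplitude 4/(pi*k)
approx = np.zeros_like(t)
for k in (1, 3, 5, 7, 9):
    approx += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

# the mean error shrinks as more sine components are added
print(np.mean(np.abs(square - approx)))
```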

  I first encountered Fourier’s theorem when I was a PhD student in Cambridge working on visual development. There I met Fergus Campbell, who in the 1960s had demonstrated that not only was Fourier’s theorem an elegant way of analyzing complex visual patterns, but it was also biologically plausible. This insight was later to become a cornerstone of various computational models of vision. But why restrict the analysis to vision?

  In effect, any complex physical event can be reduced to the mathematical simplicity of sine waves. It doesn’t matter whether it is Van Gogh’s “Starry Night,” Mozart’s Requiem, Chanel’s No. 5, Rodin’s “Thinker,” or a Waldorf salad. Any complex pattern in the environment can be translated into neural patterns that, in turn, can be decomposed into the multitude of sine-wave activity arising from the output of populations of neurons.

  Maybe I have some physics envy, but to quote Lord Kelvin, “Fourier’s theorem . . . is not only one of the most beautiful results of modern analysis but may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics.” You don’t get much higher praise than that.

  RUSSELL’S THEORY OF DESCRIPTIONS

  A. C. GRAYLING

  Philosopher; master, New College of the Humanities, London; supernumerary fellow, St. Anne’s College, Oxford; author, The Good Book: A Humanist Bible

  My favorite example of an elegant and inspirational theory in philosophy is Bertrand Russell’s theory of descriptions. It did not prove definitive, but it prompted richly insightful trains of inquiry into the structure of language and thought.

  In essence, Russell’s theory turns on the idea that there is logical structure beneath the surface forms of language, which analysis brings to light; and when this structure is revealed we see what we are actually saying, what beliefs we are committing ourselves to, and what conditions have to be satisfied for the truth or falsity of what is thus said and believed.

  One example Russell used to illustrate the idea is the assertion that “the present king of France is bald,” said when there is no king of France. Is this assertion true or false? One response might be to say that it is neither, since there is no king of France at present. But Russell wished to find an explanation for the falsity of the assertion that did not dispense with bivalence in logic—that is, the exclusive alternative of truth and falsity as the only two truth-values.

  He postulated that the underlying form of the assertion consists in the conjunction of three logically more basic assertions: (a) there is something that has the property of being king of France, (b) there is only one such thing (this takes care of the implication of the definite article “the”), and (c) that thing has the further property of being bald. In the symbolism of first-order predicate calculus, which Russell took to be the properly unambiguous rendering of the assertion’s logical form (I omit strictly correct bracketing, so as not to clutter):

  (Ex)(Kx & (y)(Ky → y = x) & Bx)

  which is pronounced “There is an x such that x is K; and for anything y, if y is K then y and x are identical”—this deals logically with “the,” which implies uniqueness—“and x is B,” where K stands for “has the property of being king of France” and B stands for “has the property of being bald.” “E” is the existential quantifier “there is . . .” or “there is at least one . . .” and “(y)” stands for the universal quantifier “for all” or “any.”

  One can now see that there are two ways in which the assertion can be false; one is if there is no x such that x is K, and the other is if there is an x but x is not bald. By preserving bivalence and stripping the assertion to its logical bones Russell has provided what Frank Ramsey wonderfully called “a paradigm of philosophy.”
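  The two failure modes are easy to mechanize. The following toy sketch is not part of Grayling’s text; the domain and predicates are invented purely for illustration:

```python
# Russell's analysis of "The F is G": true iff exactly one thing is F
# and that one thing is also G.
def the_f_is_g(domain, F, G):
    fs = [x for x in domain if F(x)]
    return len(fs) == 1 and G(fs[0])

domain = ["a", "b", "c"]
# false the first way: nothing is F (no present king of France)
print(the_f_is_g(domain, lambda x: False, lambda x: True))        # False
# false the second way: a unique F exists but is not G (a non-bald king)
print(the_f_is_g(domain, lambda x: x == "a", lambda x: False))    # False
# true: a unique F exists and is G
print(the_f_is_g(domain, lambda x: x == "a", lambda x: x == "a")) # True
```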

  To the irredeemable skeptic about philosophy, all this doubtless looks like “drowning in two inches of water,” as the Lebanese say; but in fact it is in itself an exemplary instance of philosophical analysis, and it has been very fruitful as the ancestor of work in a wide range of fields, from the contributions of Wittgenstein and W. V. Quine to research in philosophy of language, linguistics, psychology, cognitive science, computing, and artificial intelligence.

  FEYNMAN’S LIFEGUARD

  TIMO HANNAY

  Managing director, Digital Science, Macmillan Publishers Ltd.; former publisher, Nature.com; co-organizer, SciFoo

  I would like to propose not only a particular explanation but also a particular exposition and exponent: Richard Feynman’s lectures on quantum electrodynamics (QED) delivered at the University of Auckland in 1979. These are surely among the very best ever delivered in the history of science.

  For a start, the theory is genuinely profound, having to do with the behavior and interactions of those (apparently) most fundamental of particles, photons and electrons. And yet it explains a huge range of phenomena, from the reflection, refraction, and diffraction of light to the structure and behavior of electrons in atoms and their resultant chemistry. Feynman may have been exaggerating when he claimed that QED explains all of the phenomena in the world “except for radioactivity and gravity,” but only slightly.

  Let me give a brief example. Everyone knows that light travels in straight lines—except when it doesn’t, such as when it hits glass or water at anything other than a right angle. Why? Feynman explains that light always takes the path of least time from point to point, and he uses the analogy of a lifeguard racing along a beach to save a drowning swimmer. (This being Feynman, the latter is, of course, a beautiful girl.) The lifeguard could run straight to the water’s edge and then swim diagonally along the coast and out to sea, but this would result in a long time spent swimming, which is slower than running on the beach. Alternatively, he could run to the water’s edge at the point nearest to the swimmer, and dive in there. But this makes the total distance covered longer than it needs to be. The optimum, if his aim is to reach the girl as quickly as possible, is somewhere in between these two extremes. Light, too, takes such a path of least time from point to point, which is why it bends when passing between different materials.
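  The optimum is easy to find numerically. In this illustrative sketch (with invented speeds and distances, not Feynman’s own numbers), the least-time entry point turns out to satisfy exactly Snell’s law of refraction: the sines of the two angles stand in the ratio of the two speeds:

```python
import numpy as np

# Lifeguard at (0, 10) on sand, swimmer at (40, -30); the waterline is y = 0.
# Running is faster than swimming (speeds in arbitrary units).
run_speed, swim_speed = 5.0, 1.5
x = np.linspace(0, 40, 100_001)  # candidate entry points along the waterline
total_time = (np.hypot(x, 10) / run_speed           # run from (0, 10) to (x, 0)
              + np.hypot(40 - x, 30) / swim_speed)  # swim from (x, 0) to (40, -30)

best = x[np.argmin(total_time)]
# Fermat's principle implies Snell's law: sin(angle)/speed matches on both legs
sin_run = best / np.hypot(best, 10)
sin_swim = (40 - best) / np.hypot(40 - best, 30)
print(best, sin_run / run_speed, sin_swim / swim_speed)  # the two ratios agree
```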

  He goes on to reveal that this is actually an incomplete view. Using the so-called path integral formulation (although he avoids that ugly term), Feynman explains that light actually takes every conceivable path from one point to another but most of these cancel each other out, and the net result is that it appears to follow only the single path of least time. This also happens to explain why uninterrupted light (along with everything else) travels in straight lines—so fundamental a phenomenon that surely very few people even consider it to be in need of an explanation. While at first sight such a theory may seem preposterously profligate, it achieves the welcome result of minimizing that most scientifically unsatisfactory of all attributes, arbitrariness.
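  The cancellation can be caricatured numerically. In this stationary-phase toy (not Feynman’s own demonstration; the frequency is an arbitrary choice), every candidate path gets a little unit arrow whose angle is proportional to its travel time, and the arrows are added; paths far from the least-time one have rapidly varying phases and largely cancel:

```python
import numpy as np

x = np.linspace(0, 40, 4001)  # the same family of paths as above
total_time = np.hypot(x, 10) / 5.0 + np.hypot(40 - x, 30) / 1.5
omega = 50.0                              # assumed frequency; larger means sharper cancellation
arrows = np.exp(1j * omega * total_time)  # one unit arrow per path

near = np.abs(total_time - total_time.min()) < 0.5  # paths close to least time
print(abs(arrows.sum()))        # total: far smaller than the 4001 arrows added
print(abs(arrows[near].sum()))  # roughly reproduces the total on its own
print(abs(arrows[~near].sum())) # the remaining arrows mostly cancel out
```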

  My amateurish attempts at compressing and conveying this explanation have perhaps made it sound arcane. But on the contrary, a second reason to marvel is that it is almost unbelievably simple and intuitive. Even I, an innumerate former biologist, came away not merely with a vague appreciation that some experts somewhere had found something novel but with the conviction that I was able to share directly in this new conception of reality. Such an experience is all too rare in science generally, but in the abstract, abstruse world of quantum physics it is all but unknown. The main reason for this perspicuity was the adoption of a visual grammar (those famous Feynman diagrams) and an almost complete eschewal of hardcore mathematics (the fact that the spinning vectors central to the theory actually represent complex numbers seems almost incidental). Though the world it introduces is as unfamiliar as can be, it makes complete sense on its own bizarre terms.

  THE LIMITS OF INTUITION

  BRIAN ENO

  Artist; composer; musician; recording producer, U2, Coldplay, Talking Heads, Paul Simon

  We sometimes tend to think that ideas and feelings arising from our intuition are intrinsically superior to those achieved by reason and logic. Intuition—the “gut”—becomes deified as the Noble Savage of the mind, fearlessly cutting through the pedantry of reason. Artists, working from intuition much of the time, are especially prone to this belief. A couple of experiences have made me skeptical.

  The first is a question that Wittgenstein used to pose to his students. It goes like this: You have a ribbon, which you want to tie around the middle of the Earth (let’s assume Earth to be a perfect sphere). Unfortunately, you’ve tied the ribbon a bit too loosely; it’s a meter too long. The question is this: If you could distribute the resulting slack—the extra meter—evenly around the planet so the ribbon hovered just above the surface, how far above the surface would it be?

  Most people’s intuitions lead them to an answer in the region of a minute fraction of a millimeter. The actual answer is almost 16 centimeters. In my experience, only two sorts of people intuitively get close to this: mathematicians and dressmakers. I still find it rather astonishing. In fact, when I heard it as an art student, I spent most of one evening calculating and recalculating it, because my intuition was screaming incredulity.
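  The arithmetic behind the answer (the standard derivation, added here for completeness; it is not in the essay itself) is a single line. Circumference and radius are locked together by $C = 2\pi r$, so adding slack $\Delta C$ raises the ribbon everywhere by

$$\Delta r = \frac{\Delta C}{2\pi} = \frac{1\ \text{m}}{2\pi} \approx 0.159\ \text{m} \approx 16\ \text{cm},$$

whatever the size of the sphere, which is exactly what intuition refuses to believe.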

  Not many years later, at the Exploratorium in San Francisco, I had another shock-to-the-intuition. I saw for the first time a computer demonstration of John Conway’s Life. For those of you who don’t know it, it’s a simple grid with dots that are acted on according to an equally simple and totally deterministic set of rules. The rules decide which dots will live, die, or be born in the next step. There are no tricks, no creative stuff, just the rules. The whole system is so transparent that there should be no surprises at all, but in fact there are plenty: The complexity and “organic-ness” of the evolution of the dot patterns completely beggars prediction. You change the position of one dot at the start and the whole story turns out wildly differently. You tweak one of the rules a tiny bit and there’s an explosion of growth or instant Armageddon. You just have no (intuitive) way of guessing which it’s going to be.
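  The entire rule set fits in a few lines. Here is a minimal sketch (assuming NumPy and the standard birth-on-3, survive-on-2-or-3 rules, which the essay does not spell out):

```python
import numpy as np

def step(grid):
    """One generation of Conway's Life on a wrap-around (toroidal) grid."""
    g = grid.astype(np.uint8)
    # count each cell's eight neighbors by shifting the grid in all directions
    neighbors = sum(np.roll(np.roll(g, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # birth: a dead cell with exactly 3 neighbors comes alive;
    # survival: a live cell with 2 or 3 neighbors stays alive; all else dies
    return (neighbors == 3) | (grid & (neighbors == 2))

rng = np.random.default_rng(0)
grid = rng.random((32, 32)) < 0.5  # a random field of dots
for _ in range(100):
    grid = step(grid)
# move a single starting dot, or tweak one rule, and the run diverges wildly
```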

  These two examples elegantly demonstrate the following to me: (a) “Deterministic” doesn’t mean “predictable”; (b) we aren’t good at intuiting the interaction of simple rules with initial conditions (and the bigger point here is that the human brain may be intrinsically limited in its ability to intuit certain things—like quantum physics and probability, for example); and (c) intuition is not a quasi-mystical voice from outside ourselves speaking through us but a sort of quick-and-dirty processing of our prior experience (which is why dressmakers get it when the rest of us don’t). That processing tool sometimes produces impressive results at astonishing speed, but it’s worth reminding ourselves now and again that it can also be totally wrong.

  THE HIGGS MECHANISM

  LISA RANDALL

  Physicist, Harvard University; author, Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World

  The beauty of science—in the long run—is its lack of subjectivity. So answering the question “What is your favorite deep, beautiful, or elegant explanation” can be disturbing to a scientist, since the only objective words in the question are “what,” “is,” “or,” and (in an ideal scientific world) “explanation.” Beauty and elegance do play a part in science but are not the arbiters of truth. But I will admit that simplicity, which is often confused with elegance, can be a useful guide to maximizing explanatory power.

  As for the question, I’ll stick to an explanation that I think is extremely nice and relatively simple (though subtle), and which might even be verified within the year. That is the Higgs mechanism, named after the physicist Peter Higgs, who developed it. The Higgs mechanism is probably responsible for the masses of elementary particles, such as the electron. If the electron had zero mass (like the photon), it wouldn’t be bound into atoms and none of the structure of our universe would be present.

  In any case, experiments have measured the masses of elementary particles and they don’t vanish. We know they exist. The problem is that these masses violate the underlying symmetry structure we know to be present in the physical description of these particles. More concretely, if elementary particles had mass from the get-go, the theory would make ridiculous predictions about very energetic particles; for example, it would predict interaction probabilities greater than one.

  So here is a significant puzzle. How can particles have masses that have physical consequences and can be measured at low energies but act as if they don’t have masses at high energies, when predictions would become nonsensical? That is what the Higgs mechanism tells us. We don’t yet know for certain that it is indeed responsible for the origin of elementary particle masses, but no one has found an alternative satisfactory explanation.

  One way to understand the Higgs mechanism is in terms of what is known as spontaneous symmetry-breaking, which I’d say is itself a beautiful idea. A spontaneously broken symmetry is broken by the actual state of nature, but not by the physical laws. For example, if you sit at a dinner table and use the glass on your right, so will everyone else. The dinner table is symmetrical—you have a glass on your right and also your left. Yet everyone chooses the glass on the right and thereby spontaneously breaks the left-right symmetry that would otherwise be present.

  Nature does something similar. The physical laws describing an object called a Higgs field respect the symmetry of nature. Yet the actual state of the Higgs field breaks the symmetry. At low energy, it takes a particular value. This non-vanishing Higgs field is somewhat akin to a charge spread throughout the vacuum (the state of the universe with no actual particles). Particles acquire their masses by interacting with these “charges.” Because this value appears only at low energies, particles effectively have masses only at these energies, and the apparent obstacle to elementary particle masses is resolved.
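  The textbook cartoon of such symmetry-breaking (a standard illustration, not given in Randall’s essay) is a field whose potential energy

$$V(\phi) = \tfrac{1}{2}\mu^{2}\phi^{2} + \tfrac{1}{4}\lambda\phi^{4}, \qquad \mu^{2} < 0,\ \lambda > 0,$$

is perfectly symmetric under $\phi \to -\phi$, yet has its lowest-energy states at the two nonzero values $\phi = \pm\sqrt{-\mu^{2}/\lambda}$. The laws respect the symmetry; the state the field settles into does not, just as at the dinner table.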

  Keep in mind that the Standard Model of Particle Physics has worked extremely well, even though we do not yet know whether the Higgs mechanism is correct. We don’t need to know about the Higgs mechanism to know that particles have masses and to make many successful predictions with the Standard Model. But the Higgs mechanism is essential to explaining how those masses can arise in a sensible theory. The Standard Model’s success nonetheless illustrates another beautiful idea essential to all of physics, which is the concept of an “effective theory.” The idea is simply that you can focus on measurable quantities when making predictions and leave understanding the source of those quantities to later research when you have better precision.

 
