This Explains Everything


Edited by John Brockman


  From these two fundamental properties of nature, Maxwell calculated the speed of the disturbance and found out that the speed was precisely the speed that light was measured to have! Thus he discovered that light is indeed a wave—but a wave of electric and magnetic fields that moves through space at a precise speed determined by two fundamental constants in nature. This laid the basis for Einstein to come along a generation or so later and demonstrate that the constant speed of light required a revision in our notions of space and time.
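
  In modern notation (standard symbols added here, not the essay's own), the two constants are the electric permittivity and magnetic permeability of free space, and Maxwell's result for the speed of the disturbance reads

\[ c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^{8}\ \text{m/s}, \]

which matches the measured speed of light.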

  So, from jumping frogs and differential equations came one of the most beautiful unifications in all of physics: the unification of electricity and magnetism in a single theory of electromagnetism. Maxwell’s theory explained the existence of that which allows us to observe the universe around us—namely, light. Its practical implications would produce the mechanisms that power modern civilization and the principles that govern essentially all modern electronic devices. And the nature of the theory itself produced a series of further puzzles that allowed Einstein to come up with new insights into space and time!

  Not bad for a set of experiments whose worth was questioned by Gladstone (or by Queen Victoria, depending on which apocryphal story you buy), who came into Faraday’s laboratory and wondered what all the fuss was about and what use all of this experimentation was. He (or she) was told either “Of what use is a newborn baby?” or, in my favorite version of the story, “Use? Why, one day this will be so useful you will tax us for it!” Beauty, elegance, depth, utility, adventure, and excitement! Science at its best!

  FURRY RUBBER BANDS

  NEIL GERSHENFELD

  Director, Center for Bits and Atoms, MIT; author, Fab: The Coming Revolution on Your Desktop—from Personal Computers to Personal Fabrication

  I learned electrodynamics at Swarthmore, from Professor Mark Heald and his concise text on an even more concise set of equations, Maxwell’s. In four lines, just thirty-one characters (or fewer, with some notational tricks), Maxwell’s equations unified what had appeared to be unrelated phenomena (the dynamics of electric and magnetic fields), predicted new experimental observations, and contained both theoretical advances to come (including the wave solution for light and special relativity) and technologies to come (including the fiberoptics, coaxial cables, and wireless signals that carry the Internet).
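
  For reference, here is one standard modern form of the four equations (SI units, differential form; the exact character count depends on the notation and units chosen):

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \]
\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}. \]

Combining the two curl equations in empty space yields a wave equation whose propagation speed is the \( 1/\sqrt{\mu_0 \varepsilon_0} \) noted above.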

  But the explanation I found memorable was not Maxwell’s of electromagnetism, which is well known for its beauty and consequence. It was Heald’s explanation that electric field lines behave like furry rubber bands: They want to be as short as possible (the rubber) but don’t want to be near each other (the fur). This is an easily understood, qualitative description that has served me in good stead in device design. And it provides a deeper quantitative insight into the nature of Maxwell’s equations: The local solution for the field geometry can be understood as solving a global optimization.
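
  To make the analogy quantitative (a standard electrostatics result, not necessarily Heald's own formulation): a static field arranges itself so as to minimize the total field energy

\[ U = \frac{\varepsilon_0}{2} \int |\mathbf{E}|^2 \, dV \]

subject to the boundary conditions, and the Maxwell stress tensor assigns a tension of \( \varepsilon_0 E^2/2 \) along each field line (the rubber) together with an equal pressure pushing neighboring lines apart (the fur).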

  These sorts of scientific similarities that are predictive as well as descriptive help us reason about regimes our minds didn’t evolve to operate in. Unifying forces is not an everyday occurrence, but explaining them can be. Recognizing that something is precisely like something else is a kind of object-oriented thinking that helps build bigger thoughts out of smaller ideas.

  I understood Berry’s phase for spinors by trying to rotate my hand while holding up a glass; I mastered NMR spin echoes by swinging my arms while I revolved; the alignment of semiconductor Fermi levels at a junction made sense when explained as filling buckets with water. Like furry rubber bands and electric fields, these relationships represent analogies between governing equations. Unlike words, they can be exact, providing explanations that connect unfamiliar formalism with familiar experience.

  THE PRINCIPLE OF INERTIA

  LEE SMOLIN

  Physicist, Perimeter Institute; author, The Trouble with Physics, The Life of the Cosmos, and Three Roads to Quantum Gravity

  My favorite explanation in science is the principle of inertia. It explains why we can’t feel the Earth in motion. This principle was perhaps the most counterintuitive and revolutionary step taken in all of science. It was proposed by both Galileo and Descartes and has been the core of countless successful explanations in physics in the centuries since. The principle is the answer to a very simple question: How would an object that is free (in the sense that no external influence or force affects its motion) move?

  To answer this question, we need a definition of motion. What does it mean for something to move? The modern conception is that motion has to be described relative to an observer.

  Consider an object at rest relative to you—say, a cat sleeping on your lap—and imagine how it appears to move as seen by other observers. Depending on how the observer is moving, the cat can appear to have any sort of motion at all. If the observer spins relative to you, the cat will appear to spin to that observer. So to make sense of the question of how free objects move, we have to refer to a special class of observers. The answer to the question is the following:

  There is a special class of observers, relative to whom all free objects appear either to be at rest or to move in a straight line at a constant speed.

  I have just stated the principle of inertia.

  The power of this principle is that it is completely general. Once a special observer sees a free object move in a straight line with constant speed, she will observe all other free objects to so move.

  Furthermore, suppose you’re a special observer. Any observer who moves in a straight line at a constant speed with respect to you will also see the free objects move at a constant speed in a straight line with respect to him. Special observers form a big class, one whose members are all moving with respect to one another. These special observers are called inertial observers.
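
  A minimal sketch of why this class is closed under relative motion (standard textbook notation, not Smolin's own): if a second observer moves with constant velocity \( \mathbf{v} \) relative to a special observer, positions and times transform as

\[ \mathbf{x}' = \mathbf{x} - \mathbf{v}\,t, \qquad t' = t, \]

so a free object moving with constant velocity \( \mathbf{u} \) for the first observer moves with constant velocity \( \mathbf{u}' = \mathbf{u} - \mathbf{v} \) for the second: again a straight line traversed at constant speed.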

  An immediate and momentous consequence is that there is no absolute meaning to not moving. An object may be at rest with respect to one inertial observer, but other inertial observers will see it moving—always in a straight line at constant speed. This can be formulated as a principle:

  There is no way, by observing objects in motion, to distinguish observers at rest from other inertial observers.

  Any inertial observer can plausibly say that he is the one at rest and the others are moving. This is called Galileo’s principle of relativity. It explains why the Earth can move without our experiencing the gross effects.

  To appreciate how revolutionary this principle was, notice that physicists of the 16th century could disprove, by a simple observation, Copernicus’s claim that the Earth moves around the sun: Just drop a ball from the top of a tower. If the Earth was rotating around its axis and orbiting the sun at the speeds Copernicus required, the ball would land far from the tower, instead of at its base. QED: The Earth is at rest.

  But this proof assumes that motion is absolute, defined with respect to a special observer at rest, with respect to whom objects with no forces on them come to rest. By altering the definition of motion, Galileo could argue that this same experiment shows that the Earth might indeed be moving.
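
  A one-line version of Galileo's rebuttal (my paraphrase, with \( x_0 \) and \( v \) as illustrative symbols): if the Earth carries the tower horizontally at constant velocity \( v \), the released ball already shares that velocity, so

\[ x_{\text{ball}}(t) = x_0 + v t = x_{\text{tower}}(t), \]

and the ball lands at the base whether or not the Earth is moving; the experiment cannot tell the two cases apart.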

  The principle of inertia was the core of the Scientific Revolution of the 17th century; moreover, it contained the seeds of revolutions to come. To see why, notice the qualifier in the statement of Galileo’s principle of relativity: “by observing objects in motion.” For many years, it was thought that someday we would make other kinds of observations that would determine which inertial observers are really moving and which are really at rest. Einstein constructed his special theory of relativity simply by removing this qualifier. His principle of relativity states:

  There is no way to distinguish observers at rest from other inertial observers.

  And there’s more. A decade after special relativity, the principle of inertia was the seed for the next revolution—the discovery of general relativity. The principle was generalized by replacing “moving in a straight line with constant speed” with “moving along a geodesic in spacetime.” A geodesic is the generalization of a straight line in a curved geometry—it’s the shortest distance between two points. So now the principle of inertia reads:

  There is a special class of observers, relative to whom all free objects appear to move along geodesics in spacetime. These are observers who are in free fall in a gravitational field.
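
  In standard notation (added here for concreteness), a geodesic \( x^\mu(\tau) \) satisfies

\[ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\;\alpha\beta}\, \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0, \]

which reduces to motion in a straight line at constant speed when spacetime is flat and the connection coefficients \( \Gamma \) vanish.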

  And there is a consequent generalization:

  There is no way to distinguish observers in free fall from one another.

  This becomes Einstein’s equivalence principle, the core of his general theory of relativity.

  But is the principle of inertia really true? So far, it has been tested in circumstances where the energy of motion of a particle is as much as 11 orders of magnitude greater than its mass. This is pretty impressive, but there’s still a lot of room for the principle of inertia to fail. Only experiment can tell us whether it or its failure will be the core of revolutions in science to come.

  But whatever the outcome, it is the only explanation in science to have survived unscathed for so long, to have proved valid over such a range of scales, and to have sparked so many scientific revolutions.

  SEEING IS BELIEVING: FROM PLACEBOS TO MOVIES IN OUR BRAIN

  ERIC J. TOPOL

  Gary and Mary West Chair of Innovative Medicine and professor of translational genomics, Scripps Research Institute; author, The Creative Destruction of Medicine

  Our brain—with its 100 billion neurons and a quadrillion synapses, give or take a few billion here or there—is one of the most complex entities to demystify. And that may be a good thing, since we don’t necessarily want others reading our minds, which would take the recent megatrend of transparency much too far.

  But the use of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) to image the brain and construct sophisticated activation maps validates the “Seeing is believing” aphorism for any skeptics. One of the longest controversies in medicine has been whether the placebo effect, a notoriously complex mind-body end product, has a genuine biological mechanism. That controversy now seems to be resolved with the recognition that the opioid-drug pathway—induced by drugs like morphine and OxyContin—shares the same brain-activation pattern as that of the administration of placebos for pain relief. And dopamine release from specific regions of the brain has been detected after administering a placebo to patients with Parkinson’s disease. Indeed, the upgrading of the placebo effect to include discrete, distinguishable psychobiological mechanisms has now prompted consideration of placebo medications as therapeutics—Harvard University recently set up a dedicated institute called the Program in Placebo Studies and the Therapeutic Encounter.

  The decoding of the placebo effect seems a step on the way to the more ambitious quest of mind-reading. In the summer of 2011, a group at the University of California–Berkeley produced, by reconstructing brain-imaging activation maps, a reasonable facsimile of the short YouTube movies shown to their experiment’s subjects.* In fact, it was inspiring and downright scary to see, in frame-by-frame comparisons, the resemblance between the movies and what was reconstructed from the brain imaging.

  Couple this with the ongoing development of miniature portable MRIs and we may be on the way to watching our dreams in the morning on our iPad. Or, even more worrisome, revealing the movies in our brain to anyone interested in seeing them.

  THE DISCONTINUITY OF SCIENCE AND CULTURE

  GERALD HOLTON

  Mallinckrodt Professor of Physics and professor of the history of science, emeritus, Harvard University; coeditor (with Peter Galison and Silvan Schweber), Einstein for the 21st Century: His Legacy in Science, Art, and Modern Culture

  From time to time, large sections of humanity find themselves, at short notice, in a different universe. Science, culture, and society have undergone a tectonic shift, for better or worse—the rise of a powerful religious or political leader, the Declaration of Independence, the end of slavery—or, on the other hand, the fall of Rome, the Great Plague, the World Wars.

  So, too, in the world of art. Thus Virginia Woolf said famously, “On or about December 1910 human character changed,” owing, in her view, to the explosive exhibition of Post-Impressionist canvases in London that year. And after the discovery of the nucleus was announced, Wassily Kandinsky wrote: “The collapse of the atom model was equivalent, in my soul, to the collapse of the whole world. Suddenly, the thickest walls fell . . . ,” and he could turn to a new way of painting.

  Each such worldview-changing occurrence tends to be deeply puzzling or anguishing. Such occurrences are sudden fissures in the familiar fabric of history that call for explanation, with treatises published year after year, each hoping to provide an answer, each seeking the cause of the dismay.

  I will here focus on one such phenomenon.

  In 1611, John Donne published his poem The First Anniversary, containing the familiar lines “And new Philosophy calls all in doubt, / The Element of fire is quite put out;” and later, “ . . . Is crumbled out againe to his Atomies / ’Tis all in pieces, all coherence gone; / All just supply, and all Relation.” He and many others felt that the old order and unity had been displaced by relativism and discontinuity. The explanation for his anguish was an entirely unexpected event the year before: Galileo’s discovery that the moon has mountains, that Jupiter has moons, that there are immensely more stars than had been known.

  Of this happening and its consequent findings, the historian Marjorie Nicolson wrote: “We may perhaps date the beginning of modern thought from the night of January 7, 1610, when Galileo, by means of the instrument he developed [the telescope], thought he perceived new planets and new, expanded worlds.”*

  Indeed, by his work Galileo gave a deep and elegant explanation of how our cosmos is arranged—no matter how painful this may have been to the Aristotelians and poets of his time. At last, the Copernican theory, formulated long before, gained credibility. From this vast step forward, new science and new culture could be born.

  HORMESIS IS REDUNDANCY

  NASSIM NICHOLAS TALEB

  Distinguished Professor of Risk Engineering, NYU-Poly; author, The Black Swan

  Nature is the master statistician and probabilist. It follows a certain logic based on layers of redundancies, as a central risk-management approach. Nature builds with extra spare parts (two kidneys) and extra capacity in many, many things (say lungs, neural system, arterial apparatus, etc.), while designs by humans tend to be spare and overoptimized, and have the opposite attribute of redundancy—that is, leverage; we have a historical track record of engaging in debt, which is the reverse of redundancy ($50,000 in extra cash in the bank or, better, under the mattress, is redundancy; owing the bank an equivalent amount is debt).

  Now, remarkably, the mechanism called hormesis is a form of redundancy, and it is statistically sophisticated in ways that human science has (so far) failed to match.

  Hormesis is when a bit of a harmful substance, or stressor, in the right dose or with the right intensity, stimulates the organism and makes it better, stronger, healthier, and prepared for a stronger dose the next exposure. That’s the reason we go to the gym, engage in intermittent fasting or caloric deprivation, or overcompensate for challenges by getting tougher. Hormesis lost some scientific respect, interest, and practice after the 1930s, partly because some people mistakenly associated it with the practice of homeopathy. The association was unfair, as the mechanisms are entirely different. Homeopathy relies on other principles, such as the notion that minute, highly diluted parts of the agents of a disease (so small they are hardly perceptible, hence cannot cause hormesis) could help medicate against the disease itself. It has shown little empirical backing and belongs today to alternative medicine, while hormesis, as an effect, has shown ample scientific evidence.

  Now it turns out that the logics of redundancy and overcompensation are the same—as if nature had a simple, elegant, and uniform style in doing things. If I ingest, say, 15 milligrams of a poisonous substance, my body will get stronger, preparing for 20, or more. Stressing my bones (karate practice or carrying water on my head) will cause them to prepare for greater stress by getting denser and tougher. A system that overcompensates is necessarily in overshooting mode, building extra capacity and strength in anticipation of the possibility of a worse outcome, in response to information about the possibility of a hazard. This is a very sophisticated way of discovering probabilities via stressors. And of course such extra capacity or strength becomes useful in itself, as it can be used opportunistically, to some benefit, even in the absence of the hazard. Redundancy is an aggressive, not a defensive, approach to life.

  Alas, our institutional risk-management methods are vastly different. Current practice is to look in the past for the worst-case scenario, called a “stress test,” and adjust accordingly, never imagining that, just as the past experienced a large deviation without a predecessor of its own, the future may hold a deviation larger still. For instance, current systems take the worst historical recession, the worst war, the worst historical move in interest rates, the worst point in unemployment, etc., as an anchor for the worst future outcome. Many of us have been frustrated—very frustrated—by the method of stress testing in which people never go beyond what has happened before, and have even had to face the usual expression of naive empiricism (“Do you have evidence?”) when suggesting that we need to consider worse.

  And, of course, these systems don’t do the recursive exercise in their mind to see the obvious—that the worst past event itself did not have a predecessor of equal magnitude, and that someone using the past worst case in Europe before the Great War would have been surprised. I’ve called it the Lucretius underestimation, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain there is should be equal to the tallest one he has observed. Danny Kahneman has written, using as backup the works of Howard Kunreuther, that “protective actions, whether by individuals or by governments, are usually designed to be adequate to the worst disaster actually experienced. . . . Images of even worse disaster do not come easily to mind.”* For instance, in Pharaonic Egypt, scribes tracked the high-water mark of the Nile and used it as a worst-case scenario. No economist had tested the obvious: Do extreme events fall according to the past? Alas, back-testing says, “No, sorry.”
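
  As an illustration only (my construction, not Taleb’s), a few lines of Python make the Lucretius point concrete: in a heavy-tailed series of losses, the “worst case so far” keeps being broken, often by a wide margin, so anchoring protection to the historical maximum systematically underestimates what can come next.

```python
import random

random.seed(0)

# Simulate a long history of heavy-tailed yearly "losses" (Pareto-distributed,
# tail index 1.5) and track how often the naive stress-test anchor -- the worst
# loss observed so far -- is exceeded in later years, and by how much.
n_years = 200
losses = [random.paretovariate(1.5) for _ in range(n_years)]

worst_so_far = losses[0]
records = []
for year, loss in enumerate(losses[1:], start=2):
    if loss > worst_so_far:
        records.append((year, loss / worst_so_far))
        worst_so_far = loss

print(f"The historical worst case was broken {len(records)} times in {n_years} years.")
for year, ratio in records:
    print(f"  year {year}: new worst case, {ratio:.1f}x the previous record")
```

Run it with different seeds and the pattern persists: the record keeps being broken, and the new record is sometimes a large multiple of the old one.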

 
