We don’t need to be 100 percent sure that the worst fears of climate scientists are correct in order to act. All we need to think about are the consequences of being wrong.
Let’s assume for a moment that there is no human-caused climate change, or that the consequences are not dire, and we’ve made big investments to avert it. What’s the worst that happens? In order to deal with climate change:
We’ve made major investments in renewable energy. This is an urgent issue even in the absence of global warming, as the International Energy Agency has now revised the date of “peak oil” to 2020, only eight years from now.
We’ve invested in a potent new source of jobs.
We’ve improved our national security by reducing our dependence on oil from hostile or unstable regions.
We’ve mitigated the enormous off-the-books economic losses from pollution. (China recently estimated these losses as 10 percent of GDP.) We currently subsidize fossil fuels in dozens of ways, by allowing power companies, auto companies, and others to keep environmental costs off the books, by funding the infrastructure for autos at public expense while demanding that railroads build their own infrastructure, and so on.
We’ve renewed our industrial base, investing in new industries rather than propping up old ones. Climate skeptics like Bjorn Lomborg like to cite the cost of dealing with global warming. But these costs are similar to the “costs” incurred by record companies in the switch to digital-music distribution, or the “costs” to newspapers implicit in the rise of the Web. That is, they are costs to existing industries, but they ignore the opportunities for new industries that exploit the new technology. I have yet to see a convincing case made that the costs of dealing with climate change aren’t principally the costs of protecting old industries.
By contrast, let’s assume that the climate skeptics are wrong. We face the displacement of millions of people, droughts, floods and other extreme weather, species loss, and economic harm that will make us long for the good old days of the current financial-industry meltdown.
Climate change really is a modern version of Pascal’s wager. On one side, the worst outcome is that we’ve built a more robust economy. On the other, the worst outcome really is Hell. In short, we do better if we believe in climate change and act on that belief, even if we turn out to be wrong.
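To make the structure of that wager explicit, here is a minimal sketch, not from the original essay: it encodes the two choices and two states of the world with purely illustrative, assumed payoffs, and picks the choice whose worst case is least bad.

```python
# A toy decision matrix for the wager described above. The numeric payoffs are
# illustrative placeholders, not figures from the essay; the point is only the
# structure of the argument: compare worst cases across the two choices.

# outcomes[choice][state_of_world] -> rough qualitative payoff (higher = better)
outcomes = {
    "act on climate change": {
        "skeptics right": +1,   # assumed: robust economy, cleaner energy, new jobs
        "skeptics wrong": +2,   # assumed: the above, plus catastrophe averted
    },
    "do nothing": {
        "skeptics right": 0,    # assumed: business as usual
        "skeptics wrong": -10,  # assumed: displacement, droughts, economic ruin
    },
}

# Choose the action whose worst case is least bad (a maximin rule).
best = max(outcomes, key=lambda choice: min(outcomes[choice].values()))
print(best)  # -> "act on climate change"
```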
But I digress. The illustration has become the entire argument. Pascal’s wager is not just for mathematicians, nor for the religiously inclined. It is a useful tool for any thinking person.
EVOLUTIONARILY STABLE STRATEGIES
S. ABBAS RAZA
Founding editor, 3quarksdaily.com
My example of a deep, elegant, beautiful explanation in science is John Maynard Smith’s concept of an evolutionarily stable strategy (ESS). Not only does this wonderfully straightforward idea explain a whole host of biological phenomena, but it also provides a useful heuristic tool to test the plausibility of various types of claims in evolutionary biology—allowing us, for example, to quickly dismiss group-selectionist misconceptions, such as the idea that altruistic acts by individuals can be explained by the benefits that accrue to the species as a whole. Indeed, the idea is so powerful that it explains things I didn’t even realize needed explaining until I was given the explanation.
I will now present one such explanation to illustrate the power of ESS. I should note that while Smith developed ESS using the mathematics of game theory (along with collaborators G. R. Price and G. A. Parker), I will attempt to explain the main idea using almost no math.
Think of common animal species like cats, or dogs, or humans, or golden eagles. Why do all of them have (nearly) equal numbers of males and females? Why is there not sometimes 30 percent males and 70 percent females in a species? Or the other way around? Or some other ratio altogether? Why are sex ratios almost exactly fifty-fifty? I, at least, never entertained the question until I read the elegant answer.
Let’s consider walruses: They exist in the normal fifty-fifty sex ratio, but most walrus males will die virgins, whereas almost all females will mate. Only a few dominant walrus males monopolize most of the females. So what’s the point of having all those extra males around? They take up food and resources, but in the only thing that matters to evolution they are useless, because they do not reproduce. From a species point of view, it would be better and more efficient if only a small proportion of walruses were males and the rest were females; such a species of walrus would make much more efficient use of its resources and, according to the logic of group-selectionists, would soon wipe out the actual existing species of walrus with the inefficient fifty-fifty gender ratio. So why hasn’t that happened?
Here’s why: because a population of walruses (you can substitute any other species of animals I’ve mentioned, including humans, for the walruses in this example) with, say, 10 percent males and 90 percent females (or any other non–fifty-fifty ratio) would not be stable over a large number of generations. Why not? With 10 percent males and 90 percent females, each male produces about nine times as many children as any female—by successfully mating with, on average, nine females. Imagine such a population. If you were a male in this kind of population, it would be to your evolutionary advantage to produce more sons than daughters, because each son could be expected to produce roughly nine times as many offspring as any of your daughters. Let me run through some numbers to make this clearer: Suppose the average male walrus fathers ninety children, of which, on average, only nine will be males and eighty-one will be females, and the average female walrus bears ten baby walruses, only one of which will be a male and nine of which will be females. OK?
Here’s the crux of the matter: Suppose a mutation arose in one of the male walruses—as well it might over a large number of generations—that gave this particular male walrus more Y (male-producing) sperm than X (female-producing) sperm. This gene would spread like wildfire through the described population. Within a few generations, more and more male walruses would have the gene that makes them have more male than female offspring, and soon you would get the fifty-fifty ratio we see in the real world.
The same argument applies to females: Any mutation in a female that caused her to produce more male than female offspring (though sex is determined by the sperm, not the egg, there are other mechanisms the female might employ to affect the sex ratio) would spread quickly in this population, bringing the ratio closer to fifty-fifty with each subsequent generation. In fact, any significant deviation from the fifty-fifty gender ratio will, for this reason, be evolutionarily unstable and through random mutation will soon revert to it. And this is just one example of the deep, elegant, and beautiful explanatory power of ESS.
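For readers who want to watch this logic run, here is a minimal simulation sketch. It is not from the essay: the population size, brood size, blended inheritance of a sex-ratio trait, and mutation rate are all simplifying assumptions, chosen only to show a skewed sex ratio drifting back toward fifty-fifty.

```python
import random

# A toy illustration of the sex-ratio argument above (Fisher's logic, framed
# as an ESS). Parameters and the blended inheritance of the sex-ratio trait
# are simplifying assumptions, not taken from the essay.

POP_SIZE = 1000            # individuals kept per generation (assumed)
OFFSPRING_PER_FEMALE = 4   # assumed brood size
GENERATIONS = 300
MUTATION_SD = 0.05         # small random perturbation of the heritable trait


def simulate(initial_male_fraction=0.1):
    # Each individual is (sex, p), where p is a heritable trait: the
    # probability that any given offspring of this individual is a son.
    pop = [('M' if random.random() < initial_male_fraction else 'F',
            initial_male_fraction) for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS + 1):
        males = [ind for ind in pop if ind[0] == 'M']
        females = [ind for ind in pop if ind[0] == 'F']
        male_frac = len(males) / len(pop)
        if gen % 50 == 0:
            print(f"generation {gen:3d}: fraction male = {male_frac:.2f}")
        if not males or not females:
            break

        offspring = []
        for mother in females:
            father = random.choice(males)  # when males are rare, each mates often
            for _ in range(OFFSPRING_PER_FEMALE):
                # Offspring inherit the midparent trait, plus a little mutation.
                p = (mother[1] + father[1]) / 2 + random.gauss(0.0, MUTATION_SD)
                p = min(max(p, 0.0), 1.0)
                offspring.append(('M' if random.random() < p else 'F', p))

        # Keep the population size roughly constant.
        pop = random.sample(offspring, min(POP_SIZE, len(offspring)))


if __name__ == "__main__":
    simulate()
    # Starting from 10 percent males, the printed fraction drifts toward
    # roughly 0.5: lineages that overproduce the rarer sex leave more
    # grandchildren, exactly the instability described above.
```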
THE COLLINGRIDGE DILEMMA
EVGENY MOROZOV
Journalist; visiting scholar, Stanford University; Schwartz Fellow, New America Foundation; author, The Net Delusion: The Dark Side of Internet Freedom
In 1980, David Collingridge, an obscure academic at the University of Aston in the UK, published an important book called The Social Control of Technology, which set the tone of many subsequent debates about technology assessment. In it, he articulated what has become known as the Collingridge dilemma—the idea that there is always a tradeoff between knowing the impact of a given technology and the ease of influencing its social, political, and innovation trajectories.
Collingridge’s basic insight was that we can successfully regulate a given technology when it’s still young and unpopular and thus probably still hiding its unanticipated and undesirable consequences—or we can wait and see what those consequences are, but then risk losing control over its regulation. Or as Collingridge himself so eloquently put it: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming.” The Collingridge dilemma is one of the most elegant ways to explain many of the complex ethical and technological quandaries—think drones or automated facial recognition—that plague our globalized world.
TRUSTING TRUST
ERNST PÖPPEL
Psychologist; neuroscientist; CEO, Human Science Center, Munich University; author, Mindworks: Time and Conscious Experience
After many years
A little gift to Edge
From the first culture.
Using the haiku
Five/seven/five syllables
To express a thought.
Searching for beauty
To explain the unexplained
Why should I do this?
What is my problem?
I don’t need explanations!
I’m happy without!
A new morning comes.
I wake up leaving my dreams,
And I don’t know why.
I don’t understand
Why I can trust my body
In day and in night.
Looking at the moon,
Always showing the same face,
But I don’t know why!
Must I explain this?
Some people certainly can.
Beyond my power!
I look at a tree.
But is there in fact a tree?
I trust in my eyes.
But why do I trust?
Not understanding my brain
Being too complex.
Looking for answers,
Searching for explanations,
But living without.
Trust in my percepts
And trust in my memories.
Trust in my feelings.
Where does it come from,
This absolute certainty,
This trust in the world?
Trusting in the future,
Making plans for tomorrow,
Why do I believe?
I have no answer!
Knowledge is not sufficient.
Only questions count.
What is a question?
That is the real challenge!
Finding a new path.
But trust is required.
Believing the new answers,
Hiding in a shadow.
Deep explanations.
Rest in the trust of answers,
Which is unexplained.
Is there a way out?
Evading the paradox?
This answer is no!
The greatest challenge:
Accepting the present,
Giving no answers!
IT JUST IS?
BRUCE PARKER
Visiting professor, Center for Maritime Systems, Stevens Institute of Technology; oceanographer; author, The Power of the Sea: Tsunamis, Storm Surges, Rogue Waves, and Our Quest to Predict Disasters
The concept of an indivisible component of matter, something that cannot be divided further, has been around for at least two and a half millennia, first proposed by early Greek and Indian philosophers. Democritus called the smallest indivisible particle of matter átomos, meaning “uncuttable.” Atoms were also simple, eternal, and unalterable. But in Greek thinking (and generally for about 2,000 years after), atoms lost out to the four basic elements of Empedocles—fire, air, water, earth—which were also simple, eternal, and unalterable but not made up of little particles, Aristotle believing those four elements to be infinitely continuous.
Further progress in our understanding of the world, based on the concept of atoms, had to wait until the 18th century. By that time, the four elements of Aristotle had been replaced by the thirty-three elements of Lavoisier, based on chemical analysis. Dalton then used the concept of atoms to explain why elements always react in ratios of whole numbers, proposing that each element is made up of atoms of a single type and that these atoms can combine to form chemical compounds. Of course, by the early 20th century (through the work of Thomson, Rutherford, Bohr, and many others), it was realized that atoms were not indivisible and thus not the basic units of matter. All atoms were made up of protons, neutrons, and electrons, which took over the title of being the indivisible components (basic building blocks) of matter.
Perhaps because the Rutherford-Bohr model of the atom is now considered transitional to more elaborate models based on quantum mechanics, or perhaps because it evolved over time from the work of many people (and wasn’t a single beautiful proposed law), we have forgotten how much about the world can be explained by the concept of protons, neutrons, and electrons—probably more than any other theory ever proposed. With only three basic particles, one could explain the properties of 118 atoms/elements and the properties of thousands upon thousands of compounds chemically combined from those elements. A rather amazing feat, and certainly making the Rutherford-Bohr model worthy of being considered a “favorite deep, elegant, and beautiful explanation.”
Since that great simplification, further developments in our understanding of the physical universe have gotten more complicated, not less. To explain the properties of our three basic particles of matter, we went looking for even-more-basic particles. We ended up needing twelve fermions (six quarks, six leptons) to “explain” the properties of the three previously thought-to-be basic particles (as well as the properties of some other particles that were not known to us until we built high-energy colliders). And we added four other particles, force-carrier particles, to “explain” the four fundamental force fields (electromagnetism, gravitation, strong nuclear interaction, and weak nuclear interaction) that affect those three previously thought-to-be basic particles. Of these sixteen now thought-to-be basic particles, most are not independently observable (at least at low energies).
Even if the present Standard Model of Particle Physics turns out to be true, the question can be asked: “What next?” Every particle, whatever its level in the hierarchy, will have certain properties or characteristics. When we are asked why quarks have a particular electric charge, color charge, spin, or mass, do we simply say, “They just do”? Or do we try to find even-more-basic particles that seem to explain the properties of quarks, leptons, and bosons? And if so, does this continue to still-even-more-basic particles? Could it go on forever? Or at some point, when asked, “Why does this particle have these properties?,” would we simply say, “It just does”? At some point, would we have to say that there is no “why” to the universe? It just is.
At what level of our hierarchy of understanding would we resort to saying, “It just is”? The first level (with the least amount of understanding about the world) is religious: the gods of Mount Olympus, each responsible for some worldly phenomenon, or the all-knowing monotheistic god creating the world and making everything work by means truly unknowable to humans. In their theories about how the world worked, Aristotle and other Greek philosophers incorporated the Olympian gods (earth, water, fire, and air were all assigned to particular gods), but Democritus and other philosophers were deterministic and materialistic, and they looked for predictable patterns and simple building blocks that might create the complex world they saw around them. In the evolution of scientific thinking, there have been various “It just is” moments, when an explanation or theory seems to hit a wall, until someone comes along and says, “Maybe not” and goes on to advance our understanding. But as we get to the most basic questions about our universe (and our existence), the “It just is” answer becomes more likely. One basic scientific question is whether truly indivisible particles of nature will ever be found. The accompanying philosophical question is whether there can be truly indivisible particles of nature.
At some level, the next group of mathematically derived “particles” may so obviously appear not to be observable/“real” that we will describe them instead as simply entities in a mathematical model that seems to accurately describe the properties of the observable particles in the level above. At which point, the answer to the question of why these particles act as described by this mathematical model would be “They just do.” How far down we go with such models will probably depend on how much a new level in the model allows us to explain previously unexplainable observed phenomena or correctly predict new phenomena. (Or perhaps we might be stopped by the model’s becoming too complex.)
For determinists still unsettled by the probabilities inherent in quantum mechanics or the philosophical question about what would have come before a Big Bang, it’s just one more step toward recognizing the true unsolvable mystery of our universe—recognizing it, but maybe still not accepting it. A new and much better model could still come along.
SUBVERTING BIOLOGY
PATRICK BATESON
Professor of ethology, Cambridge University; coauthor (with Paul Martin), Design for a Life
Two years ago, I reviewed the evidence on inbreeding in pedigreed dogs. Inbreeding can result in reduced fertility both in litter size and sperm viability, developmental disruption, lower birth rate, higher infant mortality, shorter life span, increased expression of inherited disorders, and reduced immune-system function. The immune system is closely linked to the removal of cancer cells from a healthy body, and, indeed, reduced immune-system function increased the risk of full-blown tumors. These well-documented cases in domestic dogs confirm what is known from many wild populations of other species. It comes as no surprise, therefore, that a variety of mechanisms render inbreeding less likely in the natural world. One such is the choice of unfamiliar individuals as sexual partners.
Despite all the evidence, the story is more complicated than at first appears, and this is where the explanation for what happens has a certain beauty. While inbreeding is generally seen as undesirable, the debate has become much more nuanced in recent years. Purging of genes with seriously damaging effects carries obvious benefits, and this can happen when a population is inbred. Outcrossing, which is usually perceived as advantageous, does carry the possibility that the benefits of purging are undone by introducing new harmful genes into a population. Furthermore, a population adapted to one environment may not do well if crossed with a population adapted to another. So a balance is often struck between inbreeding and outbreeding.