Why, then, is MetSyn an epidemic? Numerous energy-adverse secular shifts are at work: nutrient-poor, low-antioxidant, high-prooxidant pseudofoods (nutrients are needed for the energy-production machinery, and antioxidants protect against oxidative stress, whose chief target is the mitochondria, the energy-producing bits of cells); the decline of regular balanced meals; and a hypoglycemia-promoting macronutrient composition (simple carbohydrates eaten without fat or protein engender unopposed insulin surges, after which glucose drops). But a central factor is the explosion in our environment of oxidative stressors that disrupt the function of (and the DNA in) mitochondria. For example:
• Metals and heavy metals (mercury in fish, high-fructose corn syrup, broken lightbulbs; arsenic to promote poultry growth; aluminum vaccine adjuvants with proliferating childhood vaccinations)
• Plastics with bisphenol A
• Personal-care products (chemicals in sunscreens, lotions, hair dyes, cosmetics, detergents, fabric softeners and dryer sheets, conditioners)
• Cleaning products
• Furnishings and clothes with formaldehyde (pressboard, no-iron cotton)
• Petrochemicals, combustion products
• Electromagnetic fields (electronics, cell phones, smart-meters)
• Fire retardants (pajamas, bedding)
• Dry-cleaning chemicals
• Air “fresheners”
• Pesticides, herbicides (potent oxidative stressors, now routinely applied in homes and office buildings and at recreational sites)
• Termite tenting
• Prescription and over-the-counter drugs, including antibiotics—both direct exposure and through our food supply
• Antimicrobial soaps with active ingredients largely unfilterable from the water supply
• Air and water pollutants and contaminants
• Artificial ingredients in foods—transfats, artificial sweeteners, dyes, preservatives
The energy-deficit (“starving cell”) hypothesis accounts for scores of facts for which the prevailing view provides no insight, and observations deemed paradoxical in the standard view emerge from it seamlessly. The hypothesis makes testable predictions—for example, that many not-yet-assessed exposures that induce oxidative stress and mitochondrial disruption will promote one or more elements of MetSyn. And for factors related to MetSyn at both extremes—say, short and long sleep duration—the energy-disruptive extreme will prove to cause MetSyn, while the energy-supportive extreme will prove to be a fellow adaptive consequence.
This reframing addresses an important problem. Some think MetSyn is slated to reverse the gains we have made in longevity. The conclusion might be surprising—and should precipitate a revision in our thinking not just about the causes of MetSyn but also about its solutions.
DEATH IS THE FINAL REPAYMENT
EMANUEL DERMAN
Professor of professional practice, Columbia University; former managing director, Goldman Sachs; author, Models. Behaving. Badly.
“Sleep is the interest we have to pay on the capital which is called in at death; and the higher the rate of interest and the more regularly it is paid, the further the date of redemption is postponed.”
So wrote Arthur Schopenhauer, comparing life to finance in a universe that must keep its books balanced. At birth you receive a loan—consciousness and light borrowed from the void, leaving a hole in the emptiness. The hole will grow bigger each day. Nightly, by yielding temporarily to the darkness of sleep, you restore some of the emptiness and keep the hole from growing limitlessly. In the end, you must pay back the principal, complete the void, and return the life originally lent you.
By focusing on the common periodic nature of sleep and interest payments, Schopenhauer extends the metaphor of borrowing to life itself. Life and consciousness are the principal, death is the final repayment, and sleep is la petite mort, the periodic little death that renews.
DENUMERABLE INFINITIES AND MENTAL STATES
DAVID GELERNTER
Computer scientist, Yale University; chief scientist, Mirror Worlds Technologies; author, America-Lite: How Imperial Academia Dismantled Our Culture (and Ushered In the Obamacrats)
My favorites:
The 19th-century German mathematician Georg Cantor’s explanation of why all denumerable infinities are the same size—why, for example, the set of all integers is the same size as the set of all positive integers, or all even integers—and why some infinities are bigger than others. (The set of all rational numbers is the same size as the set of all integers, but the set of all real numbers—terminating plus nonterminating decimals—is larger.) The set of all positive integers is the same size as the set of all even, positive integers—to see that, just line them up, one by one. 1 is paired with 2 (the first even positive integer), 2 is paired with 4, 3 with 6, 4 with 8, and so on. You’d think there would be more positive integers than even ones; but this pairing-off shows that no positive integer will ever be left without a partner. (And so they all dance happily off and there are no wallflowers.) The other proofs are similar in their stunning simplicity, but much easier to demonstrate on a blackboard than describe in words.
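The pairing itself is easy to spell out in a few lines (a toy illustration of my own, not Gelernter’s): match each positive integer n with the even number 2n, and no member of either set is ever left without a partner.

# Pair each positive integer n with the even positive integer 2n.
# No n is ever left unmatched, so the two sets are the same size.
for n in range(1, 9):
    print(n, "<->", 2 * n)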
Equally favorite: Philosopher John Searle’s proof that no digital computer can have mental states (a mental state is, for example, your state of mind when I say, “Picture a red rose” and you do)—that minds can’t be built out of software. A digital computer can do only trivial arithmetic and logical instructions. You can do them, too; you can execute any instruction that a computer can execute. You can also imagine yourself executing lots and lots of trivial instructions. Then ask yourself, “Can I picture a new mind emerging on the basis of my doing lots and lots and lots of trivial instructions?” No. Or imagine yourself sorting a deck of cards—sorting is the kind of thing digital computers do. Now imagine sorting a bigger and bigger and bigger deck. Can you see consciousness emerging at some point, when you sort a large enough batch? Nope.
And the inevitable answer to the inevitable first objection: But neurons only do simple signal transmission—can you imagine consciousness emerging out of that? This is an irrelevant question. The fact that lots of neurons make a mind has no bearing on the question of whether lots of anything else make a mind. I can’t imagine being a neuron, but I can imagine executing machine instructions. No mind emerges, no matter how many of those instructions I carry out.
INVERSE POWER LAWS
RUDY RUCKER
Mathematician, computer scientist; cyberpunk pioneer; novelist; author, Surfing the Gnarl
I’m intrigued by the empirical fact that most aspects of our world and our society are distributed according to so-called inverse power laws. That is, many distributions take the form of a curve that swoops down from an initial peak into a long tail that asymptotically hugs the horizontal axis.
Inverse power laws are elegantly simple and deeply mysterious, but more galling than beautiful. Inverse power laws are self-organizing and self-maintaining. For reasons that aren’t entirely understood, they emerge spontaneously in a wide range of parallel computations, both social and natural.
One of the first social scientists to notice an inverse power law was the philologist George Kingsley Zipf, who formulated an observation now known as Zipf’s Law. This is the statistical fact that in most documents the frequency with which a given word is used is roughly proportional to the reciprocal of the word’s popularity rank. Thus, the second most popular word is used half as much as the most popular word, the tenth most popular word is used a tenth as much as the most popular word, and so on.
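As a rough illustration of what the law predicts (a minimal sketch of my own, not anything from the essay), the product of a word’s frequency and its rank should stay roughly constant across the ranking:

from collections import Counter

def zipf_check(text, top=10):
    """Rank the words of `text` by frequency and print freq * rank,
    which Zipf's Law predicts should stay roughly constant."""
    counts = Counter(text.lower().split())
    for rank, (word, freq) in enumerate(counts.most_common(top), start=1):
        print(f"rank {rank:2d}  {word:<12}  freq {freq:5d}  freq*rank {freq * rank}")

# Usage: any long text works (short samples will be noisy), e.g.
# zipf_check(open("some_long_book.txt").read())   # hypothetical file name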
Similar kinds of inverse power laws govern society’s rewards. Speaking as an author, I’ve noticed, for instance, that the hundredth most popular author sells only a hundredth as many books as the author at the top. If the top writer sells a million copies, someone like me might sell 10,000.
Disgruntled scribes sometimes fantasize about a utopian marketplace in which the naturally arising inverse-power-law distribution would be forcibly replaced by a linear distribution—that is, a sales schedule that lies along a smoothly sloping line instead of taking the form of the present bent curve that starts at an impudently high peak and swoops down to dawdle along the horizontal axis.
But there’s no obvious way that the authors’ sales curve could be changed. Certainly there’s no hope of having some governing group try to force a different distribution. After all, people make their own choices as to what books to read. Society is a parallel computation, and some aspects of it are beyond control.
The inverse-power-law aspects of income distribution are particularly disturbing. Thus the second wealthiest person in a society might own half as much as the richest, the tenth richest might possess only a tenth as much, and—out in the ’burbs—the thousandth richest might have only one-thousandth as much as the person at the top.
Putting the same phenomenon a little more starkly, while a company’s chief executive officer might earn $100,000,000 a year, a software engineer at the same company might earn only $100,000 a year, and a worker in one of the company’s overseas assembly plants might earn only $10,000 a year—a ten-thousandth as much as the top exec.
Power-law distributions can also be found in the opening weekend grosses of movies, in the number of hits that Web pages get, and in the audience shares for TV shows. Is there some reason the top ranks do so disproportionately well while the bottom ranks seem so unfairly penalized? The short answer is no, there’s no real reason. There need be no conspiracy to skew the rewards. Galling as it seems, inverse-power-law distributions are a fundamental natural law governing the behavior of systems. They’re ubiquitous.
Inverse power laws aren’t limited to societies—they also dominate the statistics of the natural world. The tenth largest lake is likely to be a tenth as large as the biggest one, the hundredth largest tree in a forest may be a hundredth as big as the largest tree, and the thousandth largest stone on a beach is a thousandth the size of the largest one.
Whether or not we like them, inverse power laws are as inevitable as turbulence, entropy, or the law of gravity. This said, we can somewhat moderate them in our social context, and it would be too despairing to say we have no control whatsoever over the disparities between our rich and our poor.
But the basic structures of inverse-power-law curves will never go away. We can rail at an inverse power law if we like—or we can accept it, perhaps hoping to bend the harsh law toward not so steep a swoop.
HOW THE LEOPARD GOT HIS SPOTS
SAMUEL ARBESMAN
Applied mathematician; senior scholar, Ewing Marion Kauffman Foundation
In one of his celebrated just-so stories, Rudyard Kipling recounted how the leopard got his spots. Taking this approach to its logical conclusion, we would need distinct stories for every animal’s pattern: the leopard’s spots, the cow’s splotches, the panther’s solid color. And we would have to add even more stories for the complex patterning of everything from molluscs to tropical fish.
But far from these different animals requiring separate and distinct explanations, there is one underlying mechanism—a single unified theory—that can produce all of these different patterns.
Beginning in 1952, with Alan Turing’s publication of a paper entitled “The Chemical Basis of Morphogenesis,” scientists recognized that a simple set of mathematical formulas could dictate the variety of ways that patterns and colorings form in animals. This model is known as a reaction-diffusion model and works in a simple way: Imagine multiple chemicals diffusing over a surface at different rates and interacting with one another. Whereas diffusion alone simply smooths a single chemical into uniformity—think how cream poured into coffee eventually spreads and dissolves into a lighter brown liquid—when multiple chemicals diffuse and interact, nonuniform patterns can arise. Although this is somewhat counterintuitive, it not only occurs but can be generated using only a simple set of equations—thus explaining the exquisite variety of patterns seen in the animal world.
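To make the idea concrete, here is a minimal sketch of one standard reaction-diffusion system of this kind (the Gray-Scott model—my choice of illustration, not a system specified in the essay); the parameter names and values are conventional demo settings rather than anything drawn from the text:

import numpy as np

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Simulate two interacting chemicals U and V diffusing on an n x n
    grid with wrap-around edges; the final V field shows spot- or
    stripe-like patterns depending on the feed (F) and kill (k) rates."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    r = n // 10
    # Seed a small patch of V in the middle so patterning can start.
    U[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.50
    V[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.25

    def laplacian(Z):
        # Discrete Laplacian: how each cell differs from its four neighbors.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V                            # the reaction term
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V

# Usage: V = gray_scott(); nudging F and k shifts the output between
# spots and stripes, much as varying parameters shifts giraffe-like
# patterns toward Holstein-like splotches.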
Mathematical biologists have been exploring the properties of reaction-diffusion equations ever since Turing’s paper. They’ve found that varying the parameters can generate the animal patterns we see. Some mathematicians have examined the ways in which the size and shape of the surface can dictate the patterns; as the size parameter is modified, we can easily go from giraffe-like patterns to those seen on Holstein cows.
This elegant model can even yield simple predictions. For example, according to the model a spotted animal can have a striped tail (and very often does), but a striped animal will never have a spotted tail. And this is exactly what we see! These equations can generate the endless variation seen in nature but also show the limitations inherent in biology. The just-so of Kipling may be safely exchanged for the elegance and generality of reaction-diffusion equations.
THE UNIVERSAL ALGORITHM FOR HUMAN DECISION MAKING
STANISLAS DEHAENE
Neuroscientist, Collège de France; author, Reading in the Brain: The New Science of How We Read
The ultimate goal of science, as the French physicist Jean-Baptiste Perrin once stated, should be “to substitute for visible complexity an invisible simplicity.” Can human psychology achieve this ambitious goal: the discovery of elegant rules behind the apparent variability of human thought? Many scientists still consider psychology a “soft” science, whose methods and object of study are too fuzzy, too complex, and too suffused with layers of cultural complexity to ever yield elegant mathematical generalizations. Yet cognitive scientists know that this prejudice is wrong. Human behavior obeys rigorous laws of the utmost mathematical beauty and even necessity. I will nominate just one of them: the mathematical law by which we make our decisions.
All of our mental decisions appear to be captured by a simple rule that weaves together some of the most elegant mathematics of the past centuries: Brownian motion, Bayes’ Law, and the Turing machine. Let’s start with the simplest of all decisions: How do we decide that 4 is smaller than 5? Psychological investigation reveals many surprises behind this simple feat. First, our performance is slow: The decision takes us nearly half a second, from the moment the digit 4 appears on a screen to the point when we respond by clicking a button. Second, our response time is highly variable from trial to trial, anywhere from 300 milliseconds to 800 milliseconds, even though we are responding to the very same digital symbol, “4.” Third, we make errors—it sounds ridiculous, but even when comparing 4 with 5 we sometimes make the wrong decision. Fourth, our performance varies with the meaning of the objects: We are much faster and make fewer errors when the numbers are far from each other (such as 1 and 5) than when they are close (such as 4 and 5).
All of the above facts, and many more, can be explained by a single law: Our brain makes decisions by accumulating the available statistical evidence and committing to a decision whenever the total exceeds a threshold.
Let me unpack this statement. The problem the brain faces when making a decision is one of sifting the signal from the noise. The input to any of our decisions is always noisy: Photons hit our retina at random times, neurons transmit the information with partial reliability, and spontaneous neural discharges (spikes) are emitted throughout the brain, adding noise to any decision. Even when the input is a digit, neuronal recordings show that the corresponding quantity is encoded by a noisy population of neurons that fires at semi-random times, with some neurons signaling “I think it’s 4,” others “it’s close to 5” or “it’s close to 3,” and so on. Because the brain’s decision system sees only unlabeled spikes, not full-fledged symbols, separating the wheat from the chaff is a genuine problem for it.
In the presence of noise, how should one make a reliable decision? The mathematical solution to that problem was first addressed by Alan Turing while he was cracking the Enigma code at Bletchley Park. Turing found a small glitch in the Enigma machine, which meant that each intercepted German message leaked a small amount of information about the underlying code—unfortunately too little, on its own, for him to be certain of it. He realized that Bayes’ Law could be exploited to combine all the independent pieces of evidence. Skipping the math, Bayes’ Law provides a simple way to sum all of the successive hints, plus whatever prior knowledge we have, in order to obtain a combined statistic that tells us what the total evidence is.
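In symbols (a standard textbook formulation, not spelled out in the essay), the accumulated statistic is the log of the posterior odds: log posterior odds = log prior odds + the sum, over observations, of log [ P(observation | hypothesis A) / P(observation | hypothesis B) ]. Each new hint simply adds its log-likelihood ratio to the running total.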
With noisy inputs, this sum fluctuates up and down, as some incoming messages support the conclusion while others merely add noise. The outcome is what mathematicians call a random walk—a fluctuating march of numbers as a function of time. In our case, however, the numbers have a currency: They represent the likelihood that one hypothesis is true (e.g., the probability that the input digit is smaller than 5). Thus, the rational thing to do is to act as a statistician and wait until the accumulated statistic exceeds a threshold probability value. Setting it to p = 0.999 would mean that we have 1 chance in 1,000 of being wrong.
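As a concrete sketch of this accumulate-to-threshold rule (my own toy simulation, with made-up parameter values rather than anything measured in the essay), the following draws noisy evidence samples, adds each sample’s log-likelihood ratio to a running total, and responds once the total corresponds to a probability of 0.999:

import math
import random

def decide(mu=0.1, sigma=1.0, p_threshold=0.999, max_steps=100_000):
    """One simulated decision between hypothesis H+ (true mean = +mu) and
    H- (true mean = -mu). Each noisy sample adds its log-likelihood ratio
    to a running total; we respond as soon as the total favors one
    hypothesis with probability p_threshold. Evidence is actually drawn
    from H+, so a '-' response is an error."""
    bound = math.log(p_threshold / (1 - p_threshold))  # log-odds threshold
    log_odds = 0.0                                     # flat prior
    for t in range(1, max_steps + 1):
        x = random.gauss(mu, sigma)                    # one noisy evidence sample
        log_odds += 2 * mu * x / sigma**2              # Gaussian log-likelihood ratio
        if log_odds >= bound:
            return "+", t
        if log_odds <= -bound:
            return "-", t
    return "undecided", max_steps

# Running many trials reproduces the facts above: response times vary
# widely from trial to trial, occasional errors occur even on easy
# decisions, and responses get faster as mu (the separation between the
# alternatives) grows.
trials = [decide() for _ in range(1000)]
errors = sum(1 for choice, _ in trials if choice == "-")
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"error rate {errors / 1000:.3f}, mean decision time {mean_rt:.1f} samples")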