When the life history of a species demands careful nurturing of the offspring, the parents may go to a lot of trouble to mate with the best partner possible. A mate should be not too similar to oneself, but not too dissimilar either. Japanese quail of both sexes prefer first cousins as partners. Subsequent animal studies have suggested that an optimal degree of relatedness is most beneficial to the organism in terms of reproductive success. A study of a human Icelandic population also points to the same conclusion. Couples who are third or fourth cousins have a larger number of grandchildren than more closely or more distantly related partners. Much evidence from humans and nonhuman animals suggests that the choice of a mate depends on experiences in early life, with individuals tending to choose partners who are a bit different but not too different from familiar individuals, who are usually but not always close kin.
The role of early experience in determining sexual and social preferences bears on a well-known finding that humans are extremely loyal to members of their own group. They are even prepared to give up their own lives in defense of those with whom they identify. In sharp contrast, they can behave with lethal aggressiveness toward those who are unfamiliar to them. This suggests, then, a hopeful resolution to the racism and intolerance that bedevils many societies. As people from different countries and ethnic backgrounds become better acquainted with one another, they will be more likely to treat one another well, particularly if the familiarity starts at an early age. If familiarity leads to marriage, the couples may have fewer grandchildren, but that may be a blessing on an overpopulated planet. This optimistic principle, generated by knowledge of how a balance has been struck between inbreeding and outbreeding, subverts biology—but it does hold, for me, considerable beauty.
SEX AT YOUR FINGERTIPS
SIMON BARON-COHEN
Psychologist; director, Autism Research Centre, Cambridge University; author, The Science of Evil: On Empathy and the Origins of Cruelty
We all know males and females are different below the neck. There’s growing evidence that there are differences above the neck, too. Looking into the mind reveals that, on average, females develop empathy faster—and, on average, males develop stronger interests in systems, or how things work. These are differences not so much in ability as in cognitive style and patterns of interest. They shouldn’t stand in the way of achieving equal opportunities in society or equal representation in all disciplines and fields, but such political aspirations are a separate issue from the scientific observation of cognitive differences.
Looking into the brain also reveals differences. For example, whereas males, on average, have larger brain volume than females, even correcting for height and weight, females on average reach their peak volume of gray and white matter at least a year earlier than males. There’s also a difference in the number of neurons in the neocortex: On average, males have 23 billion and females 19 billion, a 16 percent difference. Looking at other brain regions also shows sex differences: For example, males, on average, have a larger amygdala (an emotion area) and females, on average, a larger planum temporale (a language area). But in all this talk about sex differences, ultimately what we want to know is what gives rise to these differences, and here is where I, at least, enjoy some deep, elegant, and beautiful explanations.
My favorite is fetal testosterone, since extra drops of this special molecule seem to have “masculinizing” effects on the development of the brain and the mind. The credit for this simple idea must go to Charles Phoenix and colleagues at the University of Kansas who proposed it in 1959* and to Norman Geschwind and Albert Galaburda at Harvard, who picked it up in the early 1980s. This is not the only masculinizing mechanism (another is the X chromosome), but it is one that has been elegantly dissected.
However, scientists who study the causal properties of fetal testosterone sometimes resort to unethical animal experiments. Take, for example, a part of the amygdala called the medial posterodorsal (MePD) nucleus, which is larger in male rats than in females. If you castrate the poor male rat (thereby depriving him of the main source of his testosterone), the MePD shrinks to the female volume in just four weeks. Or you can do the reverse experiment, giving extra testosterone to a female rat, which makes her MePD grow to the same size as a typical male rat, again in just four weeks.
In humans, we look for more ethical ways of studying how fetal testosterone does its work. You can measure this special hormone in the amniotic fluid that bathes the fetus in the womb. It gets into the amniotic fluid by being excreted by the fetus and so is thought to reflect the levels of this hormone in the baby’s body and brain. My Cambridge colleagues and I measured unborn male babies’ testosterone in this way and then invited them into an MRI brain scanner some ten years later. In a recent paper in the Journal of Neuroscience, our group shows, for example, that the more testosterone there is in the amniotic fluid, the less gray matter in the planum temporale.*
This fits with an earlier finding we published, that the more testosterone in the amniotic fluid, the smaller the child’s vocabulary size at age two.* This helps make sense of a longstanding puzzle about why girls talk earlier than boys and why boys are disproportionately represented in clinics for language delays and disorders, since boys in the womb produce at least twice as much testosterone as girls.
It also helps make sense of the puzzle of individual differences in the rate of language development in typical children regardless of their sex: why at two years old some children have huge vocabularies (600 words) and others haven’t even started talking. Fetal testosterone is not the only factor involved in language development—so are social influences, since firstborn children develop language faster than later-born children—but it seems to be a key part of the explanation. And fetal testosterone has been shown to be associated with a host of other sex-linked features, from eye contact to empathy and from attention to detail to autistic traits.
Fetal testosterone is tricky to get your hands on, since the last thing a scientist wants to do is interfere with the delicate homeostasis of the uterine environment. In recent years, a proxy for fetal testosterone has been proposed: the ratio between the second- and fourth-finger lengths, or the 2D:4D ratio. Males have a lower ratio than females in the population, and this is held to be set in the womb and to remain stable throughout one’s life. So scientists no longer have to think of imaginative ways to measure the testosterone levels directly in the womb. They can simply take a xerox of someone’s hand, palm down, at any time in their life to measure a proxy for levels of testosterone in the womb.
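As a purely illustrative calculation (the finger lengths below are hypothetical, not measurements from any study), the ratio is simply the length of the index finger divided by the length of the ring finger:

```latex
\text{2D:4D} \;=\; \frac{\text{length of 2nd digit (index)}}{\text{length of 4th digit (ring)}},
\qquad
\frac{7.0\ \text{cm}}{7.3\ \text{cm}} \approx 0.96
\;\;(\text{lower, more typically male})
\quad\text{vs.}\quad
\frac{7.2\ \text{cm}}{7.2\ \text{cm}} = 1.00.
```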
I was skeptical of the 2D:4D measure for a long time, simply because it made little sense that the relative length of your second and fourth fingers should have anything to do with your hormones prenatally. But just last year, in Proceedings of the National Academy of Sciences, Zheng and Cohn showed that even in mouse paws, the density of receptors for testosterone and estrogen differs between the second and fourth digits, providing a beautiful explanation for why your finger-length ratio is directly affected by these hormones.* The same hormone that masculinizes your brain is at work at your fingertips.
WHY DO MOVIES MOVE?
ALVY RAY SMITH
Cofounder, Pixar; digital imagery pioneer
Movies are not smooth. The time between frames is empty. The camera records only twenty-four snapshots of each second of time flow and discards everything that happens between frames—but we perceive it anyway. We see stills, but we perceive motion. How can we explain this? We can ask the same question about digital movies, videos, and videogames—in fact, all modern digital media—so the explanation is rather important, and one of my favorites.
Hoary old “persistence of vision” can’t be the explanation. It’s real, but it explains only why you don’t see the emptiness between frames. If an actor or an animated character moves between frames, then—by persistence of vision—you should see him in both positions: two Humphrey Bogarts, two Buzz Lightyears. In fact, your retinas do see both, one fading out as the other comes in—each frame is projected long enough to ensure this. It’s what your brain does with the retinas’ information that determines whether you perceive two Bogarts in two different positions or one Bogart moving.
On its own, the brain perceives the motion of an edge, but only if the edge moves not too far, and not too fast, from the first frame to the second. Like persistence of vision, this is a real effect, called apparent motion. It’s interesting, but it’s not the explanation I like so much. Classic cel animation—of the old ink-on-celluloid variety—relies on the apparent-motion phenomenon. The old animators knew intuitively how to keep the successive frames of a movement inside the “not too far, not too fast” boundaries. If they needed to exceed those limits, they had tricks to help us perceive the motion—like actual speed lines and a poof of dust to mark the rapid descent of Wile E. Coyote as he steps unexpectedly off a mesa in hot pursuit of that truly wily Road Runner.
Exceed the apparent-motion limits without those animators’ tricks and the results are ugly. You may have seen old-school stop-motion animation—such as Ray Harryhausen’s classic sword-fighting skeletons in Jason and the Argonauts—plagued by an unpleasant jerking motion of the characters. You’re seeing double—several edges of a skeleton at the same time—and correctly interpreting it as motion, but painfully so. The edges stutter, or “judder,” or “strobe” across the screen—words that reflect the pain inflicted by staccato motion.
Why don’t live-action movies judder? (Imagine directing Uma Thurman to stay within not-too-far-not-too-fast limits.) Why don’t computer-animated movies à la Pixar judder? And, for contrast, why do videogames, alas, strobe horribly? All are sequences of discrete frames. There’s a general explanation that works for all three. It’s called motion blur, and it’s simple and pretty.
Here’s what a real movie camera does. The frame it records is not a sample at a single instant, like a Road Runner or a Harryhausen frame. Rather, the camera shutter is open for a short while, called the exposure time. A moving object is moving during that short interval, of course, and is thus smeared slightly across the frame during the exposure time. It’s like what happens when you try to take a long-exposure still photo of your child throwing a ball, and his arm is just a blur. But a bug in a still photograph turns out to be a feature for movies. Without the blur, all movies would look as jumpy as Harryhausen’s skeletons.
A scientific explanation can become a technological solution. For digital movies—like Toy Story—the solution to avoid strobing was derived from the explanation for live-action: Deliberately smear a moving object across a frame along its path of motion. So a character’s swinging arm must be blurred along the arc the arm traces as it pivots around its shoulder joint. And the other arm independently must be blurred along its arc, often in the opposite direction to the first arm. All that had to be done was to figure out how to do with a computer what a camera does—and, importantly, how to do it efficiently. Live-action movies get motion blur for free, but it costs a lot for digital movies. The solution—by the group now known as Pixar—paved the way for the first digital movie. Motion blur was the crucial breakthrough.
In effect, motion blur shows your brain the path a movement is taking and also its magnitude—a longer blur means a faster motion. Instead of discarding the temporal information about motion between frames, we store it spatially in the frames as a blur. A succession of such frames overlapping a bit—because of persistence of vision—thus paints out a motion in a distinctive enough way that the brain can make the full inference.
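Here is a minimal sketch, in Python, of the idea as I read it (this is not Pixar's renderer; the scene, frame rate, shutter fraction, and sample counts are arbitrary illustrations): approximate the open-shutter interval by averaging several sub-frame renders of a moving object, so the object smears along its path within each frame.

```python
import numpy as np

WIDTH, HEIGHT = 96, 54   # a tiny toy resolution

def render_disc(cx, cy, radius=5.0):
    """One instantaneous sample: a white disc at (cx, cy) on a black background."""
    ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2).astype(float)

def motion_blurred_frame(frame_index, fps=24.0, shutter=0.5, subsamples=16):
    """
    Approximate an open shutter by averaging `subsamples` renders spread over
    `shutter` (the fraction of the frame interval the shutter stays open).
    The moving disc smears along its path, just as film smears a moving actor.
    """
    t0 = frame_index / fps                           # time at the start of the frame
    acc = np.zeros((HEIGHT, WIDTH))
    for i in range(subsamples):
        t = t0 + (shutter / fps) * (i / subsamples)  # a moment while the shutter is open
        cx = 10.0 + 200.0 * t                        # the disc travels left to right
        acc += render_disc(cx, HEIGHT / 2)
    return acc / subsamples                          # the average is the blurred frame

sharp = motion_blurred_frame(frame_index=3, subsamples=1)   # a single instant: no blur
blurred = motion_blurred_frame(frame_index=3)               # sixteen sub-frame samples
print("pixels covered, sharp vs. blurred:",
      int((sharp > 0).sum()), int((blurred > 0).sum()))
```

With a single subsample the disc occupies a compact spot (the Harryhausen case); with sixteen sub-frame samples it leaves a streak along its path, which is exactly the spatial record of between-frame motion the essay describes.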
Pixar throws thousands of computers at a movie—spending sometimes more than thirty hours on a single frame. On the other hand, a videogame—essentially a digital movie restricted to real time—has to deliver a new frame every thirtieth of a second. It was only seventeen years ago that the inexorable increase in computation speed per unit dollar (described by Moore’s Law) made motion-blurred digital movies feasible. Videogames simply haven’t arrived yet. They can’t compute fast enough to motion-blur. Some give it a feeble try, but the feel of the play lessens so dramatically that gamers turn it off and suffer the judder instead. But Moore’s Law still applies, and soon—five years? ten?—even videogames will motion-blur properly and fully enter the modern world.
Best of all, motion blur is just one example of a potent general explanation called the sampling theorem. The theorem works when the samples are frames, taken regularly in time to make a movie, or when they’re pixels, taken regularly in space to make an image. It works for digital audio, too. In a nutshell, the explanation of smooth motion from unsmooth movies expands to explain the modern media world—why it’s even possible. But that would take a longer explanation.
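For the record, the theorem Smith is gesturing at is the Nyquist–Shannon sampling theorem: a signal containing no frequencies above a bound B can be reconstructed exactly from regular samples taken at a rate f_s greater than 2B:

```latex
x(t) \;=\; \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{f_s}\right)\,
\operatorname{sinc}\!\left(f_s t - n\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u},
\qquad f_s > 2B.
```

On this reading, motion blur acts as the temporal low-pass filter applied before sampling at twenty-four frames per second, which is why fast motion aliases into judder without it.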
WOULD YOU LIKE BLUE CHEESE WITH IT?
ALBERT-LÁSZLÓ BARABÁSI
Complex network scientist; Distinguished Professor and director of Northeastern University’s Center for Complex Network Research; author, Bursts: The Hidden Pattern Behind Everything We Do
It would take about 100 years to try the 100,000 recipes carried on Epicurious, the largest recipe portal in the United States. What fascinates me about this number is not how huge it is but how tiny. Indeed, a typical dish has about eight ingredients. Thus the roughly 300 ingredients used in cooking today allow for about a quadrillion distinct dishes. Add to this your choice of deep-freezing, frying, smashing, centrifuging, or blasting your ingredients and you start to see why cooking is a growth industry. It currently uses only a negligible fraction of its resources—less than one in a trillion of the dishes that culinary combinatorics permits.
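The arithmetic behind “about a quadrillion,” assuming a dish is simply an unordered choice of eight distinct ingredients from a pantry of 300 (the preparation methods listed above multiply the space further):

```latex
\binom{300}{8} \;=\; \frac{300!}{8!\,292!} \;\approx\; 1.5 \times 10^{15},
```

against which the roughly $10^{5}$ recipes on Epicurious are a vanishing fraction.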
Don’t you like green eggs and ham? Why, then, do we leave this vast terra incognita unexplored? Do we simply lack the time to taste our way through this boundless bounty, or is it because most combinations are repugnant? Could there be rules that explain why we like some ingredient combinations and avoid others? The answer appears to be yes, which leads me to my most flavorful explanation to date.
As we search for evidence to support (or refute) any “laws” that may govern our culinary experiences, we must bear in mind that food sensation is affected by many factors, from color to texture, from temperature to sound. Yet palatability is largely determined by flavor, a group of sensations that includes odors, tastes, freshness, and pungency. And flavor is mainly chemistry: Odors are molecules that bind olfactory receptors, tastes are chemicals that stimulate taste buds, and freshness and pungency are signaled by chemical irritants in the mouth and throat. Therefore, if we want to understand why we prize some ingredient combinations and loathe others, we have to look at the chemical profile of our recipes.
But how can chemistry tell us which ingredients taste good together? Well, we can formulate two orthogonal hypotheses. First, we may like some ingredients together because their chemistry (henceforth, their flavor) is complementary—what one lacks is provided by the other. The alternative is the polar opposite: Taste is like color matching in fashion—we prefer to pair ingredients that already share some flavor compounds, bringing them into chemical harmony with one another. Before you go on reading, I urge you to stop for a second and ponder which of these you find more plausible.
The first one makes more sense to me: I put salt in my omelet not because the chemical bouquet of the egg shares the salt’s only chemical, NaCl, but precisely because it is missing it. Yet lately chefs and molecular gastronomers have been betting on the second hypothesis, and they have even given it a name: the food-pairing principle. Its consequences are already on your table. Some contemporary restaurants serve white chocolate with caviar because the two share trimethylamine and other flavor compounds, or chocolate and blue cheese because they share at least seventy-three flavor compounds. Yet the evidence for food pairing is at best anecdotal, which makes a scientist like me ask: Is this anything more than a myth?
So whom should I trust, my intuition or the molecular gastronomers? And how do we really test whether two ingredients go well together? Our first instinct was to taste, under controlled conditions, all ingredient pairs. Yet 300 ingredients offered about 44,850 pairs to sample, forcing us to search for smarter ways to settle the question. Having spent the last decade trying to understand the laws governing networks, from the social network to the intricate web of genes governing our cells, my colleagues and I decided to rely on network science. We compiled the flavor compounds of over 300 ingredients and organized them into a network, linking two ingredients if they shared flavor compounds. We then used the collective intelligence accumulated in the existing body of recipes to test what goes with what. If two common ingredients are almost never combined, like garlic and vanilla, there must be a reason for it; those who tried the combination may have found it either uninspired or outright repulsive. If, however, two ingredients are combined more often than we would expect based on their individual popularity, we took that as a sign that they must taste good together. Tomato and garlic are in this category, combined in 12 percent of all recipes.*
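A toy version of that bookkeeping, in Python; the compound sets and recipes below are invented purely for illustration (the real analysis used the published flavor-compound profiles of over 300 ingredients and tens of thousands of recipes):

```python
from itertools import combinations
from collections import Counter

# Invented flavor-compound profiles (illustrative only, not the real data).
compounds = {
    "tomato":  {"2-isobutylthiazole", "hexanal", "methional"},
    "garlic":  {"diallyl disulfide", "methional"},
    "vanilla": {"vanillin"},
    "beer":    {"hexanal", "isoamyl acetate"},
}

# Invented recipe corpus: each recipe is just a set of ingredients.
recipes = [
    {"tomato", "garlic"}, {"tomato", "garlic", "beer"},
    {"tomato", "beer"}, {"garlic", "beer"}, {"tomato", "garlic"},
]

# 1) The flavor network: link two ingredients if they share any compound.
flavor_links = {
    frozenset((a, b)): compounds[a] & compounds[b]
    for a, b in combinations(sorted(compounds), 2)
    if compounds[a] & compounds[b]
}

# 2) Observed vs. expected co-occurrence in the recipe corpus.
n = len(recipes)
popularity = Counter(ing for r in recipes for ing in r)
together = Counter(frozenset(p) for r in recipes for p in combinations(sorted(r), 2))

for pair, observed in together.items():
    a, b = sorted(pair)
    expected = popularity[a] * popularity[b] / n   # how often we'd expect them together by chance
    shared = flavor_links.get(pair, set())
    print(f"{a} + {b}: shared compounds = {len(shared)}, "
          f"observed = {observed}, expected = {expected:.1f}")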
In the end, the truth is rather Dr. Seussian: We may like some combinations here but not there. That is, North American and Western European cuisines show a strong tendency to combine ingredients that share flavor compounds. If you are here, serve Parmesan with papaya and strawberries with beer. Do not try this there, however: East Asian cuisine thrives by avoiding ingredients that share flavor chemicals. So if you hail from Asia, yin and yang are your guiding force: harmony sought by pairing polar opposites. Do you like soy sauce with honey? Try them together and you might.