Fortunately that time has now come for the Higgs mechanism, or at least its simplest implementation, which involves a particle called the Higgs boson. The Large Hadron Collider at CERN, near Geneva, should have a definitive result within the year on whether this particle exists. [Author’s note: Just before this book went to press, the discovery was announced. Now that a Higgs boson has been found, its properties can be measured to determine whether it conforms to the simplest expectations or to a more elaborate implementation of the Higgs mechanism.] If confirmed, it will demonstrate that the Higgs mechanism is correct and will furthermore tell us the underlying structure responsible for spontaneous symmetry-breaking and for spreading “charge” throughout the vacuum. The Higgs boson would also be a new type of particle (a fundamental boson, for those versed in physics terminology) and would be in some sense a new type of force. Admittedly, this is all pretty subtle and esoteric. Yet I (and much of the theoretical physics community) find it beautiful, deep, and elegant.
Symmetry is great. But so is symmetry-breaking. Over the years, many aspects of particle physics were first considered ugly and then considered elegant. Subjectivity in science goes beyond communities to individual scientists. And even those scientists change their minds over time. That’s why experiments are critical. As difficult as they are, results are much easier to pin down than the nature of beauty.
THE MIND THINKS IN EMBODIED METAPHORS
SIMONE SCHNALL
Director, Cambridge Embodied Cognition and Emotion Laboratory; University Lecturer, Department of Social and Developmental Psychology, Cambridge, UK
Philosophers and psychologists have grappled with a fundamental question for quite some time: How does the brain derive meaning? If thoughts consist of the manipulation of abstract symbols, just as computers process 0s and 1s, then how are such abstract symbols translated into meaningful cognitive representations? This so-called symbol-grounding problem has now been largely overcome, because many findings from cognitive science suggest that the brain does not translate incoming information into abstract symbols in the first place. Instead, sensory and perceptual inputs from everyday experience are taken in their modality-specific form, and they provide the building blocks of thoughts.
British empiricists such as Locke and Berkeley long ago recognized that cognition is inherently perceptual. But following the cognitive revolution in the 1950s, psychology treated the computer as the most appropriate model for studying the mind. Now we know that a brain does not work like a computer. Its job is not to store or process information; instead, its job is to drive and control the actions of the brain’s large appendage, the body. A new revolution is taking shape, considered by some to bring an end to cognitivism and to usher in a transformed kind of cognitive science—namely, an embodied cognitive science.
The basic claim is that the mind thinks in embodied metaphors. Early proponents of this idea were linguists, such as George Lakoff, and in recent years social psychologists have been conducting the relevant experiments, providing compelling evidence. But it does not stop here; there is also a reverse pathway. Because thinking is for doing, many bodily processes feed back into the mind to drive action.
Consider the following recent findings that relate to the basic spatial concept of verticality. Because moving around in space is a common physical experience, concepts such as “up” or “down” are immediately meaningful relative to one’s own body. The concrete experience of verticality serves as a perfect scaffold for comprehending abstract concepts, such as morality: Virtue is up, whereas depravity is down. Good people are “high-minded” and “upstanding” citizens, whereas bad people are “underhanded” and the “low life” of society. Recent research by Brian Meier, Martin Sellbom, and Dustin Wygant illustrated that research participants are faster to categorize moral words when presented in an up location and immoral words when presented in a down location. Thus people intuitively relate the moral domain to verticality; however, Meier and colleagues also found that people who do not recognize moral norms—namely, psychopaths—fail to show this effect.*
People not only think of all things good and moral as up, but they also think of God as up and the Devil as down. Further, those in power are conceptualized as being high up relative to those over whom they hover and exert control, as shown by Thomas Schubert.* All the empirical evidence suggests that there is indeed a conceptual dimension that leads up, both literally and metaphorically. This vertical dimension that pulls the mind up to considering what higher power there might be is deeply rooted in the basic physical experience of verticality.
Verticality not only influences people’s representation of what is good, moral, and divine, but movement through space along the vertical dimension can even change their moral actions. Lawrence Sanna, Edward Chang, Paul Miceli, and Kristjen Lundberg recently demonstrated that manipulating people’s location along the vertical dimension can turn them into more “high-minded” and “upstanding” citizens. They found that people in a shopping mall who had just moved up an escalator were more likely to contribute to a charity donation box than people who had moved down on the escalator. Similarly, research participants who had watched a film depicting a view from high above—namely, flying over clouds seen from an airplane window—subsequently showed more cooperative behavior than participants who had watched a more ordinary, and less “elevating,” view from a car window. Thus being physically elevated induced people to act on “higher” moral values.*
The growing recognition that embodied metaphors provide a common language of the mind has led to fundamentally different ways of studying how people think. For example, under the assumption that the mind functions like a computer, psychologists hoped to figure out how people think by observing how they play chess or memorize lists of random words. From an embodied perspective, it is evident that such scientific attempts were doomed to fail. It is increasingly clear that the cognitive operations of any creature, including humans, have to solve certain adaptive challenges of the physical environment. In the process, embodied metaphors are the building blocks of perception, cognition, and action. It doesn’t get much simpler or more elegant than that.
METAPHORS ARE IN THE MIND
BENJAMIN K. BERGEN
Associate professor, cognitive science, University of California–San Diego
I study language, and in my field there have been a couple of game-changing explanations over the centuries. One of them explains how languages change over time. Another explains why all languages share certain characteristics. But my favorite is the one that originally got me hooked on language and the mind: It’s an explanation of metaphor.
When you look closely at how we use language, you find that a lot of what we say is metaphorical—we talk about certain things as though they were other things. We describe political campaigns as horse races: “Senator Jones has pulled ahead.” Morality is cleanliness: “That was a dirty trick.” And understanding is seeing: “New finding illuminates the structure of the universe.”
People have known about metaphor for a long time. Until the end of the 20th century, almost everyone agreed on one particular explanation, neatly articulated by Aristotle. Metaphor was seen as a strictly linguistic device—a kind of catchy turn of phrase—in which you call one thing by the name of another thing it’s similar to. This is probably the definition of metaphor you learned in high school English. According to this view, you can metaphorically say that “Juliet is the sun” if, and only if, Juliet and the sun are similar—for instance, if they are both particularly luminous.
But in their 1980 book Metaphors We Live By, George Lakoff and Mark Johnson proposed an explanation for metaphorical language that flouted this received wisdom. They reasoned that if metaphor is just a free-floating linguistic device based on similarity, then you should be able to metaphorically describe anything in terms of anything else it’s similar to. But Lakoff and Johnson observed that real metaphorical language, as actually used, isn’t haphazard at all. Instead, it’s systematic and coherent.
It’s systematic in that you don’t just metaphorically describe anything as anything else. Instead, it’s mostly abstract things that you describe in terms of concrete things. Morality is more abstract than cleanliness. Understanding is more abstract than seeing. And you can’t reverse the metaphors. While you can say “He’s clean” to mean he has no criminal record, you can’t say “He’s moral” to mean that he bathed recently. Metaphor is unidirectional.
Metaphorical expressions are also coherent with one another. Take the example of understanding and seeing. There are lots of relevant metaphorical expressions: for example, “I see what you mean” and “Let’s shed some light on the issue” and “Put his idea under a microscope and see if it actually makes sense.” And so on. While these are totally different metaphorical expressions—they use completely different words—they all coherently cast certain aspects of understanding in terms of specific aspects of seeing. You always describe the understander as the seer, the understood idea as the seen object, the act of understanding as seeing, the understandability of the idea as the visibility of the object, and so on. In other words, the aspects of seeing that you use to talk about aspects of understanding stand in a fixed mapping to one another.
These observations led Lakoff and Johnson to propose that there was something going on with metaphor that was deeper than just the words. They argued that the metaphorical expressions in language are really only surface phenomena, organized and generated by mappings in people’s minds. For them, the reason metaphorical language exists and is systematic and coherent is that people think metaphorically. You don’t just talk about understanding as seeing; you think about understanding as seeing. You don’t just talk about morality as cleanliness; you think about morality as cleanliness. And it’s because you think metaphorically—because you systematically map certain concepts onto others in your mind—that you speak metaphorically. The metaphorical expressions are merely (so to speak) the tip of the iceberg.
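One way to picture such a structured mapping is as a small lookup table from the abstract domain to the concrete one. The sketch below is purely illustrative; the entries and names are my own shorthand, not notation from Lakoff and Johnson:

```python
# UNDERSTANDING IS SEEING, rendered as a lookup table from the
# abstract domain (left) to the concrete domain (right). The entries
# are illustrative shorthand, not Lakoff and Johnson's own notation.
UNDERSTANDING_IS_SEEING = {
    "understander":          "seer",
    "idea":                  "seen object",
    "understanding an idea": "seeing an object",
    "understandability":     "visibility",
    "explaining":            "shedding light",   # "shed some light on the issue"
    "scrutinizing":          "magnifying",       # "put his idea under a microscope"
}

# The mapping is fixed and unidirectional: it is read from abstract
# to concrete, never the reverse ("He's moral" can't mean he bathed).
print(UNDERSTANDING_IS_SEEING["understander"])  # -> "seer"
```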
As explanations go, this one covers all the bases. It’s elegant in that it explains messy and complicated phenomena in terms of something much simpler—a structured mapping between two conceptual domains in the mind. It’s powerful in that it explains things other than metaphorical language: Recent work in cognitive psychology shows that people think metaphorically, even in the absence of metaphorical language (affection as warmth, morality as cleanliness). The conceptual-metaphor explanation suggests that we understand abstract concepts like affection or morality by metaphorically mapping them onto more concrete concepts. In terms of utility, the conceptual-metaphor explanation has generated extensive research in a variety of fields: linguists have documented the richness of metaphorical language and explored its diversity across the globe, psychologists have tested its predictions in human behavior, and neuroscientists have searched the brain for its physical underpinnings. And finally, the conceptual-metaphor explanation is transformative—it does away with the accepted idea that metaphor is just a linguistic device based on similarity. In an instant, it made us rethink more than 2,000 years of received wisdom. This isn’t to say that the conceptual-metaphor explanation doesn’t have its weaknesses or that it’s the final word in the study of metaphor. But it’s an explanation that casts a huge shadow. So to speak.
THE PIGEONHOLE PRINCIPLE
JON KLEINBERG
Tisch University Professor of computer science, Cornell University; coauthor (with David Easley), Networks, Crowds, and Markets: Reasoning About a Highly Connected World
Certain facts in mathematics feel as though they contain a kind of compressed power—they look innocuous and mild-mannered when you first meet them, but they’re dazzling when you see them in action. One of the most compelling examples is the pigeonhole principle.
Here’s what the pigeonhole principle says. Suppose a flock of pigeons lands in a group of trees and there are more pigeons than trees. Then after all the pigeons have landed, at least one of the trees contains more than one pigeon.
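In general form (a standard statement of the principle, not the essay’s own wording): if n objects are placed into m boxes and n is greater than m, then some box receives at least two objects. Equivalently, no function from a larger finite set to a smaller one can be one-to-one:

\[
f\colon A \to B \quad\text{with}\quad |A| > |B| \;\implies\; \exists\, a_1 \neq a_2 \in A \ \text{such that}\ f(a_1) = f(a_2).
\]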
This fact sounds obvious, and it is: There are simply too many pigeons, so they can’t each get their own tree. Indeed, if this were the end of the story, it wouldn’t be clear why this is a fact that deserves to be named or noted. But to appreciate the pigeonhole principle, you have to see some of the things you can do with it.
So let’s move on to a fact that doesn’t look nearly as straightforward. The statement itself is intriguing, but what’s more intriguing is the effortless way it will follow from the pigeonhole principle. Here’s the fact: Sometime in the past 4,000 years there have been two people in your family tree—call them A and B—with the property that A was an ancestor of B’s mother and also an ancestor of B’s father. Your family tree has a loop, where two branches growing upward from B come back together at A—in other words, there’s a set of parents in your ancestry who are blood relatives of each other, thanks to this relatively recent shared ancestor A.
It’s worth mentioning a couple of things here. First, the “you” in the previous paragraph is genuinely you, the reader. Indeed, one of the interesting features of this fact is that I can make such assertions about you and your ancestors despite not even knowing who you are. Second, the statement doesn’t rely on any assumptions about the evolution of the human race or the geographic sweep of human history. Here, in particular, are the only assumptions I’ll need:
(1) Everyone has two biological parents.
(2) No one has children after the age of a hundred.
(3) The human race is at least 4,000 years old.
(4) At most, a trillion human beings have lived in the past 4,000 years. (Scientists’ actual best estimate for (4) is that roughly 100 billion human beings have ever lived in all of human history; I’m bumping this up to a trillion just to be safe.)
All four assumptions are designed to be as uncontroversial as possible; and even then, a few exceptions to the first two assumptions and an even larger estimate in the fourth would only necessitate some minor tweaking to the argument.
Now, back to you and your ancestors. Let’s start by building your family tree going back 40 generations: you, your parents, their parents, and so on, 40 steps back. Since each generation lasts, at most, 100 years, the last 40 generations of your family tree all take place within the past 4,000 years. (In fact, they almost surely take place within just the past 1,000 or 1,200 years, but remember that we’re trying to be uncontroversial.)
We can view a drawing of your family tree as a kind of “org chart,” listing a set of jobs or roles that need to be filled by people. That is, someone needs to be your mother, someone needs to be your father, someone needs to be your mother’s father, and so forth, going back up the tree. We’ll call each of these an “ancestor role”—it’s a job that exists in your ancestry, and we can talk about this job regardless of who actually filled it. The first generation back in your family tree contains two ancestor roles, for your two parents. The second contains four ancestor roles, for your grandparents; the third contains eight roles, for your great-grandparents. Each level you go back doubles the number of ancestor roles that need to be filled, so if you work out the arithmetic, you’ll find that 40 generations in the past you have more than a trillion ancestor roles.
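The arithmetic takes one line to check. Here is a minimal sketch in Python (the variable name is mine):

```python
# Each generation back doubles the number of ancestor roles, so
# generation 40 alone contains 2**40 of them.
roles_at_generation_40 = 2 ** 40
print(roles_at_generation_40)           # 1099511627776
print(roles_at_generation_40 > 10**12)  # True: more than a trillion
```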
At this point, it’s time for the pigeonhole principle to make its appearance. The most recent 40 generations of your family tree all took place within the past 4,000 years, and we decided that, at most, a trillion people lived during this time. So there are more ancestor roles (over a trillion) than there are people to fill these roles (at most a trillion). This brings us to the crucial point: At least two roles in your ancestry must have been filled by the same person. Let’s call this person A.
Now that we’ve identified A, we’re basically done. Starting from two different roles that A filled in your ancestry, let’s walk back down the family tree toward you. These two walks downward from A have to first meet each other at some ancestor role lower down in the tree, filled by a person B. Since the two walks are meeting for the first time at B, one walk arrived via B’s mother and the other arrived via B’s father. In other words, A is an ancestor of B’s mother and also an ancestor of B’s father, just as we wanted to conclude.
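The argument never actually exhibits A and B, but at toy scale you can watch the collision happen. The sketch below is a hypothetical miniature in Python, with 500 “people” standing in for the trillion and 10 generations standing in for 40; a role string such as 'MF' means your mother’s father:

```python
import random
from itertools import product

random.seed(0)

DEPTH = 10  # generations back: 2**10 = 1,024 ancestor roles at this depth
POP = 500   # a toy "population": fewer people than roles, as in the essay

# Fill every ancestor role at depth DEPTH with a randomly chosen person ID.
roles = [''.join(p) for p in product('MF', repeat=DEPTH)]
filled_by = {role: random.randrange(POP) for role in roles}

# Pigeonhole: 1,024 roles but only 500 people, so some person A must
# fill at least two distinct roles.
roles_of = {}
for role, person in filled_by.items():
    roles_of.setdefault(person, []).append(role)
r1, r2 = next(rs for rs in roles_of.values() if len(rs) > 1)[:2]

# Walk both roles down toward "you" (shorter and shorter prefixes);
# the walks first meet at the longest common prefix -- that role is B.
# (The empty role string would be "you" yourself.)
i = 0
while r1[i] == r2[i]:
    i += 1
b_role = r1[:i]
print(f"A fills roles {r1} and {r2}")
print(f"The walks meet at B, the role {b_role!r}: one arrives via "
      f"B's {'mother' if r1[i] == 'M' else 'father'}, the other via "
      f"B's {'mother' if r2[i] == 'M' else 'father'}")
```

Scaling the depth back up to 40 and the population up to a trillion changes nothing in the logic; it only restores the margins the essay uses.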
Once you step back and absorb how the argument works, you can appreciate a few things. First, in a way it’s more a fact about simple mathematical structures than it is about people. We’re taking a giant family tree—yours—and trying to stuff it into the past 4,000 years of human history. It’s too big to fit, so certain people have to occupy more than one position in it.
Second, the argument has what mathematicians like to call a nonconstructive aspect. It never really gave you a recipe for finding A and B in your family tree; it convinced you that they must be there, but very little more.
And finally, I like to think of it as a typical episode in the lives of the pigeonhole principle and all the other quietly powerful statements that dot the mathematical landscape—a band of understated little facts that seem frequently to show up at just the right time and, without any visible effort, clean up an otherwise messy situation.
WHY PROGRAMS HAVE BUGS
MARTI HEARST