There are actually two pairs of homunculi splayed along the central sulcus; one pair maps the sensations from the body, the other pair maps motor output to the body. The pair on the left side of the brain maps the right side of the body, and the pair on the right side of the brain maps the left side of the body. The sensory and motor homunculi face each other. The motor homunculus is, perhaps significantly, positioned more forward (technical terms: anterior or frontal), toward the eyes and nose. It controls the output, telling the muscles how to move. The sensory homunculus is positioned toward the back of the head (technical terms: posterior or caudal, from the Latin for “tail”). It brings in the input from the many kinds of sensations our bodies respond to: position, pain, pressure, temperature, and more. The homunculi are strange little people, with oversized heads, huge tongues, enormous hands, and skinny torsos and limbs.
FIGURE 1.1. Sensory homunculus.
You can’t help but see that these cortical proportions are far from the proportions of the body. Rather than representing the sizes of the body parts, the sizes of the cortical representations of the various body parts are proportional to the quantities of neurons ascending to them or descending from them. That is, the head and hands have more cortical neurons relative to their body size, and the torso and limbs have fewer cortical neurons relative to their body size. More neural connections mean more sensory sensitivity on the sensory side and more action articulation on the action side. The disproportionate sizes of cortical real estate make perfect sense once we think about the multitude of articulated actions that the face, tongue, and hands must perform and the sensory feedback needed to modulate their actions. Our tongues are involved in the intricate coordinated actions necessary for eating, sucking, and swallowing, for speaking, groaning, and singing, and for many other activities that I will leave to your imagination. Our mouths smile and frown and scowl, they blow bubbles and whistle and kiss. Hands type and play the piano, throw balls and catch them, weave and knit, tickle babies and pat puppies. Our toes, on the other hand, are sadly underused, incompetent, and unnoticed—until we stub them. That functional significance trounces size is deep inside us, or rather, right there at the top of the head.
Significance trounces size not only in the brain but also in talk and thought. We saw this in research in our laboratory. We first collected the body parts most frequently named across languages; the presumption is that if a body part is named across languages, it’s probably important irrespective of culture. The top seven were head, hands, feet, arms, legs, front, and back. All the names are short; Zipf’s Law tells us that the more a term gets used, the shorter it gets (co-op, TV, and NBA are examples). And, in fact, all seven are significant even compared to other useful parts, like elbow or forearm. We asked a large group of students to rank those parts by significance and another group to rank them by size. As expected, and as with the homunculus in the brain, significance and size didn’t always line up. Significance reflected the size of cortical territory, not body size: head and hands were rated as highly significant but aren’t particularly large, and backs and legs are large but were rated lower in significance.
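(For readers who like to tinker: Zipf’s relationship between frequency and length is easy to check for yourself. What follows is a minimal Python sketch, not part of the research described here; the file name corpus.txt is a hypothetical placeholder for any plain-text file you have on hand.)

    # Zipf's law of abbreviation: more frequent words tend to be shorter.
    # "corpus.txt" is a placeholder for any plain-text file.
    import re
    from collections import Counter

    with open("corpus.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z]+", f.read().lower())
    counts = Counter(words)

    # Compare the average length of the 100 most frequent words
    # with the average length of all the other words.
    top = [w for w, _ in counts.most_common(100)]
    rest = set(counts) - set(top)
    avg = lambda ws: sum(len(w) for w in ws) / len(ws)
    print(f"100 most frequent words: {avg(top):.1f} letters on average")
    print(f"all other words:         {avg(rest):.1f} letters on average")

On almost any sizable stretch of English text, the frequent words come out markedly shorter, just as head, hands, and feet would predict.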
Next we asked: Which body parts are faster for people to recognize, the large ones or the significant ones? We tried it two ways. In one study, people saw pairs of pictures of bodies, each in a different pose, each with a part highlighted. You might be thinking that people would naturally find large parts faster, so, to make all parts equal irrespective of size, we highlighted each part with a dot at its center. In the other study, people first saw the name of a body part and then a picture of a body with a part highlighted. In both studies, half the pairs had the same part highlighted and half had different parts highlighted. Participants were asked to indicate “same” or “different” as fast as possible. An easy task; there were very few errors. Our interest was in the time to respond: Would people respond faster for significant parts or for large ones? You’ve probably already guessed what happened. Significant parts were faster.
The triumph of significance over size was even stronger for name-body comparisons than for body-body comparisons. Names are strings of letters; they lack the concrete features of pictures, like size and shape. Names, then, are more abstract than depictions. Similarly, associations to names of objects are more abstract than associations to depictions of objects. Names of things evoke abstract features like function and significance, whereas pictures of things evoke concrete perceptible features.
First General Fact Worth Remembering: Associations to names are more abstract than associations to pictures.
Remember that all the parts used in our studies were significant compared to familiar but less significant parts like shoulder or ankle. Notably, the word for each part—head, hands, feet, arms, legs, front, and back—has numerous extended uses, uses so common that we’re unaware of their bodily origins. Here are just a few: head of the nation, lost his head; right-hand person, on the one hand, hands down; foot of the mountains, all feet; arm of a chair, arm of the government; the idea doesn’t have legs, shake a leg, break a leg; up front, front organization; not enough backing, behind the back. Notice that some of these figurative meanings play on the appearance of the parts, elongated as in the arms and legs of a chair; others play on the functions of the parts, such as the head of the nation and the idea has no legs. Of course, many other body parts have figurative extensions: someone might be the butt of a joke or have their fingers in everything. Then there are all the places claiming to be the navel of the world—visiting all of them could keep you traveling for months—the navel, that odd dot on our bellies, a remnant of the lifeline that once connected us to our mothers. Once you start noticing figurative uses, you see and hear them everywhere.
Like our knowledge of space, our knowledge of our bodies comes from a multitude of senses. We can see our own bodies as well as those of others. We can hear our footsteps and our hands clapping and our joints clicking and our mouths speaking. We sense temperature and texture and pressure and pleasure and pain and the positions of our limbs, both from the surface of our skin and from proprioception, those sensations of our bodies from the inside. We know where our arms and legs are without looking; we can feel when we are off balance or about to be. It’s mind-boggling to think of how much delicate and precise coordination of so many sensory systems is needed just to stand and walk, not to mention shoot a basket or do a cartwheel. We weren’t born doing those things.
Babies have so much to learn. And they learn so fast: their brains create millions of synapses, connections between neurons, per second. But their brains also prune synapses. Otherwise, our brains would become tangled messes, everything connected to everything else, a multitude of possibilities but no focused action, no way to strengthen important connections and weaken irrelevant ones, no way to choose among all those possibilities and organize resources to act. Among other things, pruning allows us to quickly recognize objects in the world and to quickly catch falling teacups but not burning matches. But that process has costs: we can mistake a coyote for a dog and a heavy rock for a rubber ball.
This brings us to our First Law of Cognition: There are no benefits without costs. Searching through many possibilities to find the best can be time consuming and exhausting. Typically, we simply don’t have enough time or energy to search and consider all the possibilities. Is it a friend or a stranger? Is it a dog or a coyote? We need to quickly extend our hands when a ball is tossed to us but quickly duck when a rock is hurled at us. Life, if nothing else, is a series of trade-offs. The trade-off here is between considering possibilities and acting effectively and efficiently. Like all laws in psychology, this one is an oversimplification, and the small print has the usual caveats. Nevertheless, this law is so fundamental that we will return to it again and again.
INTEGRATING BODIES: ACTION AND SENSATION
With this in mind, watching five-month-old babies is all the more mystifying. On their backs, as they are now supposed to be placed, they can suddenly catch sight of their hand and are captivated. They stare intently at their hand as though it were the most interesting thing in the world. They don’t seem to understand that what they are regarding so attentively is their own hand. They might move their hand quite unintentionally and then watch the movement without realizing that they’ve caused it. If you put your finger or a rattle in their hand, they’ll grasp it; grasping is reflexive. But if the hand and the rattle disappear from sight, they won’t track them. Gradually, sight and sensation and action get integrated, starting at the top of the body, hands first. Weeks later, after they’ve accomplished reaching and grasping with their hands, they might accidentally catch their foot. Flexible little things with stubby legs, they might then bring their foot to their mouth. Putting whatever’s in the hand into the mouth is also quite automatic, but at first they don’t seem to realize that it’s their own foot.
Babies start disconnected. They don’t link what they see with what they do and what they feel. And they don’t link the parts of their body with each other. We take the connections between what we see and what we feel for granted, but human babies don’t enter the world with those connections; the connections are learned, slowly, over many months. Ultimately, what unites the senses is action. That is, the output—action—informs and integrates the input—sensation—through a feedback loop. Unifying the senses depends on acting: doing and seeing and feeling, sensing the feedback from the doing at the same time.
It’s not just babies who calibrate perception through action. We adults do it too. Experiments in which people don prismatic glasses that distort the world by turning it upside down or sliding it sideways show this dramatically. The first known experiments showing adaptation to distorting lenses were performed in the late nineteenth century by George Stratton, then a graduate student and later the founder of the Berkeley Psychology Department. Stratton fashioned lenses that distorted vision in several ways and tried them himself, wearing them for weeks. At first, Stratton was dizzy, nauseated, and clumsy, but gradually he adapted. After a week, the upside-down world seemed normal, and so did his behavior. In fact, when he removed the lenses, he got dizzy and stumbled again. Since then, experiments with prismatic lenses that turn the world every which way have been repeated many times. You can try the lenses in many science museums or buy them on the Web. A charismatic introductory psychology teacher at Stanford used to bring a star football player to class and hand him distorting lenses. Then the instructor would toss the player a football, and of course the star player fumbled, much to everyone’s delight. A rather convincing demonstration! That disrupted behavior, the errors in reaching or walking after the lenses come off, is the measure of adaptation to the prismatic world.
The surprising finding is this: seeing in the absence of acting doesn’t change perception. If people are wheeled about in a chair and handed what they need—if they don’t walk or reach for objects—they do not adapt to the prismatic lenses. Then, when the lenses are removed, the behavior of passive sitters is normal. No fumbling. No dizziness.
Because acting changes perception, it should not be surprising that acting changes the brain. This has been shown many times in many ways, in monkeys as well as in humans. Here’s the basic paradigm: give an animal or a person extensive experience using a tool. Then check areas of the brain that underlie perception of the body to see if they now extend outside the body to include the tool. Monkeys, for example, can quickly learn to use a hand rake to pull out-of-reach objects, especially treats, to themselves. After they become adept at using a rake, the brain regions that keep track of the area around the hand as it moves expand to include the rake as well as the hand. These findings were so exciting that they have been replicated many times in many variations in many species. The general finding is that extensive practice using tools enlarges both our conscious body image and our largely unconscious body schema.
That extensive tool use enlarges our body images to include the tools provides evidence for the claim that many of us jokingly make, that our cell phones or computers are parts of our bodies. But it also makes you wish that the people who turn and whack you with their backpacks had had enough experience with backpacks that their backpacks had become part of their body schemas. Too bad we don’t use our backpacks the ways we use the tools in our hands.
The evidence on action is sufficient to declare the Second Law of Cognition: Action molds perception. There are those who go farther and declare that perception is for action. Yes, perception serves action, but perception serves so much more. There are the pure pleasures of seeing and hugging people we love, listening to music we enjoy, viewing art that elevates us. There are the meanings we attach to what we feel and see and hear, the sight of a forgotten toy or the sound of a grandparent’s voice or the taste, for Proust, of a madeleine. Suffice it to say that action molds perception.
Earlier I observed that our skin surrounds and encloses our bodies, separating our bodies from the rest of the world. It turns out that it’s not quite that simple (never forget my caveats and my caveats about caveats). It turns out that we can rather easily be tricked into thinking that a rubber hand—yuck—is our own.
In a paradigmatic experiment, participants were seated at a table, with their left arm under the table, out of view. On the table was a very humanlike rubber hand positioned like the participant’s real arm. Participants watched as the experimenter gently stroked the rubber arm with a fine paintbrush. In synchrony, the experimenter stroked the participant’s real but not visible arm with an equivalent brush, matching the rhythm. Amazingly, most participants began to think that the arm they could see, the rubber arm, was their own. They reported that what they saw was what they felt. Action, per se, is not involved in creating this illusion, but proprioceptive feedback seems to be crucial. Both hands, the participant’s real hand and the rubber hand, are immobile. What seems to underlie the illusion is sensory integration, the integration of simultaneously seeing and feeling.
If people perceive the rubber arm as their own, then watching a threat to it should alarm them. And that is what happened in subsequent experiments. First, as before, participants experienced enough synchronous stroking of their hidden real arm and the visible rubber arm to claim ownership of the rubber arm. Then the experimenters threatened the rubber arm, initiating an attack on it with a sharp needle. At the same time, they measured activation in areas of the brain known to respond to anticipated pain, empathetic pain, and anxiety. The more participants reported ownership of the rubber hand, the greater the activation in the brain regions underlying anticipated pain (left insula, left anterior cingulate cortex) during the threatened, but aborted, attack.
The rubber hand phenomenon provides yet another explanation of why people’s body schemas enlarge to include tools but don’t seem to enlarge to include their backpacks. Ownership of a rubber hand depends on simultaneous seeing and sensing: seeing the rubber hand stroked and sensing simultaneous stroking on the real hand. We can’t see our backpacks, and whatever sensations we have are pressure or weight on our backs and shoulders, which gives no clue to the width of the backpack generating the pressure.
UNDERSTANDING OTHERS’ BODIES
Now to the bodies of others. It turns out that our perception and understanding of the bodies of others are deeply connected to the actions and sensations of our own bodies. What’s more, the connection of our bodies to those of others is mediated by the very structure of the brain and the nervous system. Let’s begin again with babies, let’s say, one-year-olds. Babies that young have begun to understand the goals and intentions of the actions of others, at least for simple actions like reaching. You might wonder how we know what babies are thinking. After all, they can’t tell us (not that what we say we are thinking is necessarily reliable). We know what babies are thinking the same way we often know what adults are thinking: from what they are looking at. Sometimes actions can be more revealing than words.
The most common way researchers infer the thoughts of babies is through a paradigm known as habituation of looking. Two ideas underlie this paradigm: first, people, even, or especially, babies, look at what they’re thinking about; and second, stuff that’s new grabs attention and thought. In a typical task, researchers show infants a stimulus or an event, in this case, a video of someone reaching for an object. At the same time, they monitor how much the infants are looking at the event. They show the event again, and monitor again. The researchers show the stimulus or the event over and over until the baby loses interest and looks away, that is, until the infant habituates to the event. After the infant habituates, the researchers show a new event that alters the previous one in one of two ways. They change the goal of the action by switching the object of the reaching, or they switch the means of attaining the goal by changing the manner of reaching. The question of interest is whether infants will look more at the event where the goal of reaching was changed or the event where the means of attaining the goal was changed.
If the infant understands that it’s the goal that matters, not the means to the goal, the infant should look more when the goal changes than when the means changes. At ten months, infants were indifferent to the changes; they looked equally at both. Both events were new, and the infants didn’t regard a change of goal as more interesting than a change of manner of attaining the goal. That changed in only two months. Twelve-month-old infants looked more when the goal changed than when the means to the goal changed. A leap of understanding of goal-directed behavior in two months.
More support for the notion that one-year-olds understand action-goal couplings comes from tracking their eye movements as they watch someone reaching. Remarkably, the eye movements of one-year-old infants jump to the goal of the action before the hand even reaches the goal, suggesting that they anticipate the goal.