Again, gesturing helped people think. Those who had gestured answered more questions about the environments correctly than those who hadn’t. And they answered faster. Those who gestured made more accurate inferences; they were better at answering questions from perspectives they hadn’t read. Several people gestured for some but not all descriptions; they performed better on the descriptions for which they had gestured. To clinch the case for gesture, we asked another group of students to read and remember the descriptions while sitting on their hands. Sure enough, those who sat on their hands performed worse than the group allowed to gesture.
The environments were rich and complex, as were the gestures. Most people produced a long string of gestures, sometimes revising as they worked out their understanding. They rarely looked at their hands, and when they did, it was a brief glance. That means that the gestural representations were spatial-motor, not visual. Given that, it makes much more sense that people blind from birth gesture. What matters are the movements in space, not what they look like.
Surprisingly, gesturing while reading didn’t slow reading, even though people were doing two things at the same time. Doing two things simultaneously is supposed to increase cognitive load and lower performance. Not so for gesturing and thinking. Paradoxically, adding to the cognitive load reduced the cognitive load.
Understanding the explanations was hard; it took effort to figure out where everything was. Words march one after another in horizontal rows; they bear only a symbolic relationship to the environments. But the gestures resemble the environments; they put the places and paths in a virtual map step-by-step. In essence, the gestures translated the language into thought.
Will gesturing facilitate any kind of thinking? Our guess is that gesturing can help thinking that is complicated and that can be spatialized. Research on understanding elementary actions in physics and mechanics supports those ideas. A string of gears works because adjacent pairs of gears go in opposite directions: a gear that rotates clockwise is surrounded on both sides by gears that rotate counterclockwise. This is called the parity rule. Gesturing helps people grasp the parity rule, that in a chain of gears, each successive gear reverses the direction of rotation.
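The parity rule described above is simple enough to state in a few lines of code. This is only an illustrative sketch (the function name and direction labels are my own, not from the text): each mesh between adjacent gears reverses the direction of rotation, so direction depends only on whether a gear sits an even or odd number of steps from the first.

```python
def gear_direction(first_direction: str, gear_index: int) -> str:
    """Direction of the gear at position gear_index (0-based) in a chain,
    given the first gear's direction. Each meshing pair reverses rotation,
    so even positions match the first gear and odd positions oppose it."""
    if gear_index % 2 == 0:
        return first_direction
    return "counterclockwise" if first_direction == "clockwise" else "clockwise"

# A chain of four gears whose first gear turns clockwise alternates:
directions = [gear_direction("clockwise", i) for i in range(4)]
# → ["clockwise", "counterclockwise", "clockwise", "counterclockwise"]
```

The alternating pattern is exactly what the back-and-forth gesture traces out along the chain.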
Gesturing helps people understand the water level problem, that when a glass is tilted, the water level stays parallel to the ground; it doesn’t tilt with the glass. Imagining tilting the glass didn’t help understanding, but tilting the hand as if grasping a glass did. This is a crucial, if puzzling, distinction. Imagination, that is, visual-spatial reasoning, was less effective than making a tilting action for understanding that the level of water in a glass stays parallel to the ground even when the glass is tilted.
Rotating the hand in the right direction also helps some people solve mental rotation problems.
Our own work is venturing farther, beyond the inherently spatial. We have given students descriptions of all sorts of things to remember and reason from: party planners’ schedules, people’s preferences for film genres, orderings of countries by economic growth, explanations of how a car brake or a bicycle pump work, multiplication of two 3-digit numbers, and more. In each case, about two-thirds to three-quarters of participants gestured as they read, and their gestures formed virtual diagrams of the problems. The formats of their virtual diagrams varied widely, but the essence of the information represented did not. In all cases, gesturing while studying speeded answering questions at test, indicating that the gesturing consolidated the information. For the mechanical systems, the car brake and bicycle pump, gesturing at study improved performance on the tests as well. We’ve also found that people gesture when given diagrams rather than descriptions of the mechanical systems and maps of the environments. That is, even when provided with visualizations, many people use gestures to make spatial-motor models of the systems and environments they are trying to learn.
Watching people’s hands as they read and understand feels like watching their thinking. Better than peering into the brain, it’s all out there before the eyes. Some of our students used the joints of their fingers as the rows and columns of a table for representing preferences or schedules. Others made virtual tables on the table. The gestures that represented the mechanical systems, the car brake and bicycle pump, were remarkably creative and diverse (just as were people’s diagrams, as shall be seen). Despite the diversity, the gestures (and the diagrams) abstracted the underlying structure and dynamics of the systems. As before, we required half the participants to sit on their hands. Remarkably, almost a third of those asked to sit on their hands couldn’t comply; they could not stop gesturing! It was as if they couldn’t think if they couldn’t move their hands. Some told us exactly that.
How curious and surprising that we think with our hands. But gesturing is no panacea. It does not guarantee success. Telling people to gesture doesn’t necessarily improve performance. The gesturing has to be part and parcel of the thinking, to represent the thought. And the thought has to be correct. If the thinking goes astray, so do the gestures and so does the correct solution. Another problem we gave students illustrates this nicely. Try it yourself: A ship is moored in a harbor. A rope ladder with 10 rungs hangs over its side. The distance between each rung is 12 inches. The lowest rung touches the water. Because of the incoming tide, the surface of the water rises 4 inches per hour. How soon will the water cover the third rung from the top of the ladder?
This problem seems like one of those rate × time problems we struggled with in junior high. But it’s not. It’s a trick, but most of our very bright undergraduates fell for it. A majority of students gestured while trying to solve this problem. Typically, they used one hand to keep track of the rungs of the ladder and the other to calculate. Those who gestured succeeded in computing the wrong answer more accurately, that is, the time at which the water would rise to the third rung from the top—if the boat were attached to the floor of the sea. But the boat floats! So the level of the water relative to the ladder doesn’t change as the tide comes in. The answer to When will the water cover the third rung from the top? is: Never. Realizing that the boat floats doesn’t require gesturing. That’s a fact that has to be drawn from memory. So, in this case, those who gestured were more likely to solve the problem incorrectly because their gestures were driven by incorrect thinking.
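The naive rate-times-time calculation, and why it misleads, can be made explicit. A sketch using the numbers from the problem above (variable names are my own):

```python
RUNG_SPACING_IN = 12   # inches between adjacent rungs
RISE_PER_HOUR_IN = 4   # rate at which the tide rises
TOTAL_RUNGS = 10       # the lowest rung starts at the waterline

# Naive calculation: treat the ladder as if it were fixed to the sea floor.
# The third rung from the top is the 8th from the bottom, so the water must
# climb 7 rung-gaps from the lowest rung to reach it.
gaps_to_climb = TOTAL_RUNGS - 3                    # 7 gaps
distance_in = gaps_to_climb * RUNG_SPACING_IN      # 84 inches
naive_hours = distance_in / RISE_PER_HOUR_IN       # 21 hours -- the wrong answer

# Correct reasoning: the ship floats, so the ladder rises with the tide
# and the water never gains on the rungs.
correct_answer = "never"
```

The gesturing hands execute the naive computation faithfully; what they cannot supply is the remembered fact that boats float.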
To be effective, gestures need to represent the thought in the right way. If gestures that are congruent with thought augment thought, then it should be possible to design gestures that can help people comprehend, learn, think, and solve problems. One such gesture is routinely used in teaching physics. Students are taught to form three axes by holding their thumb and two adjacent fingers at right angles and to rotate them to solve vector problems. In school settings, children were taught a gesture designed to help them understand that the two sides of an equation are equal. Children made a V gesture with their index and middle fingers, each pointing to a side of the equation. Children taught that gesture showed greater understanding of the underlying principle of equality.
Touch pads provide an excellent opportunity to induce students to make gestures that are congruent with the desired thinking. For example, addition is a discrete task: each number gets a count. By contrast, number line estimation is a continuous task. In a number line estimation task, people are presented with a horizontal line representing the numbers from 1 to 100. They are given a number, say 27 or 66, and asked to mark where that number would be on the number line. Children performed better when the addition task was paired with discrete one-to-one gestures and when the number line estimation task was paired with a continuous gesture.
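The contrast between the two tasks is a contrast between two mappings. A minimal sketch (the function names and the 500-pixel line length are illustrative assumptions, not from the text): counting maps each item to one discrete increment, while number line estimation maps a number proportionally onto a continuous span.

```python
def count_items(items) -> int:
    """Discrete mapping: one increment per item, like one tap per object."""
    total = 0
    for _ in items:
        total += 1
    return total

def number_line_position(n: float, lo: float = 1, hi: float = 100,
                         line_length_px: float = 500) -> float:
    """Continuous mapping: place n proportionally on a line from lo to hi,
    like one smooth slide of the finger along a touch pad."""
    return (n - lo) / (hi - lo) * line_length_px
```

A discrete one-to-one gesture is congruent with the first mapping; a continuous sliding gesture is congruent with the second.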
HOW REPRESENTATIONAL GESTURES WORK
We’ve shown that the gestures people spontaneously make for themselves can help them think. That gestures embody thought. That they map thought directly. They represent thought, not in words or symbols but as actions in space. This is the mysterious part. It’s not just motor memory, the kinds of gestures that dancers or pianists or surgeons or tennis players or typists might make to jog their memories. Those gestures are miniatures of the actual actions they would make. Making a map of an environment with hands and fingers isn’t at all like walking through an environment. The hands and fingers are used to represent the environment. The mappings are abstractions. When we walk through environments, we walk on paths. We can think of the paths as lines and then we can represent lines by moving a finger, moving a hand, or making a discrete chop with a finger or a hand or an arm. We can abstract places to dots and make dots in a variety of ways. Similarly, we can think of each movie genre as a dot, and we can represent our preferences for genres by ordering the dots on a line. We can use that same mapping to represent events in time; each event is a dot ordered on a line. Maps of environments, preferences for movie genres, events ordered in time—and much more—all use the same representational primitives. They use dots to represent places or ideas and use lines to represent relations between them. There’s more, circles and boxes, and even more. We’ll return to this when we get to graphics in Chapter Eight. The same sorts of mappings are used on the page.
The gestures people make as they think have another boon: they allow seeing thought in action. Others can watch our thinking and we theirs. In real time, as it happens. Can the kinds of gestures that serve our own thought also serve the thought of others? We turn to that now.
GESTURES CHANGE THE THOUGHTS OF OTHERS
We start with babies again. Babies whose caretakers use gesture and speech simultaneously (rather than unaccompanied speech) acquire vocabulary faster. It could be that gestures like pointing clarify the referents of the speech. It could be that gestures enact or depict the referents of the speech. It is probably both and more. When babies see more gestures, they gesture more themselves, providing, as we saw earlier, yet another route for increasing vocabulary.
Parents are so proud when their toddlers can count. But then they are baffled. Despite getting all the number words in the right order, their young prodigies can’t answer: How many? What counting means to the toddlers is matching a sequence of words to a series of points to objects. It’s rote learning like the alphabet song, with the addition of a marching pointing finger. It isn’t yet about number as we understand number. Don’t get me wrong, this is a remarkable achievement. That they can do one-to-one correspondence, one number for each object irrespective of the object and increasing numbers, at that, is impressive. Other primates don’t do that. But one-to-one correspondence is only part of the picture. When they can’t answer how many, they don’t yet understand cardinality, that the last number word, the highest number, is the total count for the set. If you show them a picture of two sets, say Jonah’s candy and Sarah’s candy, and ask them to tell how many pieces of candy each child has, they often count Jonah’s and without stopping, continue on to count Sarah’s. Gesturing a circle around each set of candy helps them to count each set separately, an important step toward understanding cardinality. The circular gesture creates a boundary around each set, including the candy in Sarah’s set and separating hers from Jonah’s. Children are more likely to stop counting at the boundary.
Now we jump to bigger people. When we explain something to someone else, we typically gesture. Those gestures are usually larger than the gestures we make for ourselves, there are more of them, and they work together to form a narrative that parallels the spoken narrative. If speakers make larger gestures for others and link them in a narrative, then it’s likely they think the gestures help their listeners. We certainly depend on gestures when someone tells us which way to go or how to do something. But that kind of gesture depicts actions we are supposed to take in the world. What about gestures that are meant to change thought, to form representations in the mind?
For this, we turned to concepts that people of all ages and occupations need to learn and that are difficult. Complex systems. The branches of government, what each does, how laws are passed, how they are challenged in courts. How elections proceed, how babies are made, how the heart works. Shakespeare’s plays, the main figures, their social and political relations, what each did and how others reacted. Diverse as they are, underneath each is a complex system with a structural layer and a dynamic layer. Structure is an arrangement of parts. Dynamics is a causal sequence of actions. Structure is space; dynamics, time.
Dozens of studies have shown that it’s easier to grasp structure than dynamics. Structure is static. Dynamics is change, often causality. Novices and the half of us low in spatial ability understand structure, but it takes expertise or ability or effort to understand dynamics. Structure can readily be put on a page. A map of a city. A diagram of the branches of government, the parts of a flower, a family tree. Networks of all kinds. Action doesn’t stay still; it’s harder to capture and harder to show. The actions are diverse, and the causality is varied and might not be visible, as with forces and wind.
Gestures are actions; could gestures that represent actions help people understand dynamics? For a dynamic system, we chose the workings of a car engine. We wrote a script that explained its structure and action, everything that would be needed to answer the questions we asked later. Then we made two videos of the same person using the same script to explain the car engine. One video had eleven gestures showing structure, such as the shape of the pistons. Another had eleven gestures showing action, say, of the piston. The same rudimentary diagram appeared in both videos. A large group of students watched one or the other of the videos. Because structure is easy, we didn’t expect effects of structure gestures, but it was important that both groups of viewers see gestures.
After viewing the explanation of the car engine, participants answered a set of questions, half on structure, half on action. Then they created visual explanations of the car engine. Finally, they explained the workings of the car engine to a video camera so that someone else could understand. Viewing action gestures had far-reaching consequences. People who had viewed action gestures answered more action questions correctly, even though all the information was in the script. The differences in the visual and videoed explanations were more dramatic. Those who had seen action gestures showed far more action in their visualizations: they used more arrows, they depicted actions like explosions, intake, and compression. They separated the steps of the process more cleanly. In their videoed explanations, they used far more action gestures and most of those were inventions, not imitations. They used more action words, even though they hadn’t heard more action words. Viewing straightforward and natural gestures conveying action gave students a far deeper understanding of action, an understanding revealed in their knowledge, in their diagrams, in their gestures, and in their words.
Put simply, gestures change thought. Gestures that we make as well as gestures that we see. Next, we turned to concepts of time, using the same technique: identical script, different gestures for different participants. Perhaps because words come one after another, people can have trouble grasping that two steps or events aren’t strictly ordered in time. They may be simultaneous in actual time or their order might not matter. When the stages of a procedure are described as, first you do M, then you can do P or Q in either order, and finally you do W, people often remember that P precedes Q (or vice versa). When the description of the steps in time was accompanied by a beat gesture for each step, people made the error of strictly ordering the steps. However, when the description came with a gesture indicating simultaneity, unordered steps were remembered correctly, as unordered.
Another temporal concept that doesn’t come easily for people is cyclicity. Think of cycles like the seasons, washing clothes, the rock cycle, and this one: the seed germinates, the flower grows, the flower is pollinated, a new seed is formed. When given the steps of cycles like these and asked to diagram them, people tend to draw linear, but not circular, diagrams. People do understand circular diagrams of cycles perfectly well, but they produce linear ones. Gestures change that. When we presented one of the processes with gestures that proceeded along a line, the linear tendency strengthened. But when we presented one of the processes with gestures that went in a circle, a majority drew circular diagrams. Importantly, they weren’t simply copying the gestures. We repeated the experiment with another group and instead of asking them to create a diagram after the last stage, we asked them: What comes next? Those who had seen circular gestures usually went back to the beginning of the cycle and said: the seed germinates. But those who had seen linear gestures tended to continue to a new process, like gathering flowers for a bouquet. So, seeing the circular gestures did change the way people thought.
These studies are only a drop in the bucket of the research showing that the gestures we view change the ways we think. The trick is to create gestures that establish a space of ideas that represents the thought felicitously. That gestures have the power to change thought has powerful implications for communication, in the classroom and outside.
GESTURES DO MATH AND MUSIC
Fingers and toes and other parts of the body have been used for counting all over the world for eons. At first, one finger for one thing, much like a tally. The one-to-one use of fingers and toes is an elegant example of a congruent mapping, one thing to one finger. But the number of things can go far beyond the number of fingers and toes, and even shoulders, knees, and every other joint in the body. People eventually came up with the bright idea of using some joints as multiples of others, so some joints became tens, hundreds, thousands, and so on. That transformation left a one-to-one congruent correspondence far behind. Going even further, the hand itself became the first slide rule or calculator. It took practice, just like using a slide rule does, to become adept at bending and straightening fingers in order to add, multiply, subtract, and divide. Like playing the piano. Pianos also have a congruent mapping, the left-to-right order of the keys to the increasing frequencies of the notes the keys play. Using the hand as a calculator began as spatial congruence and evolved into performance congruence, one that mapped hand actions to arithmetic operations.