Teaching What Really Happened: How to Avoid the Tyranny of Textbooks and Get Students Excited About Doing History


by James W. Loewen


  “In every child who is born, under no matter what circumstances, and of no matter what parents,” wrote James Agee, “the potentiality of the human race is born again.”40 A teacher’s job is to help that child realize a portion of that potential today. The job includes that same task tomorrow. To keep expectations high, day after day—even when students do not respond—is truly a challenge. Some students are less able—partly because they believe they are. A teacher cannot reach everyone, but history teachers can come close. Owing to the multifaceted nature of its subject matter and the diverse ways it can be taught, history can bring out some worthwhile performance from almost every student. “You say you’re ‘just not good at history.’ I don’t know you very well yet, so I’m not going to challenge that statement. I just want you to change the verb tense, and just for the rest of the fall semester. Say ‘Up until now, I just wasn’t good at history.’ OK?”

  On that high note, the chapter should end. First, however, we must tie up two loose ends. Earlier, we saw that the performance gap between have and have-not students is larger in history than in any other subject. We also noted that differential teacher expectations—strong as they are—explain only part of that gap. Something else is also going on: a special alienation of have-not students. Much of American history is taught to justify, not merely analyze, our nation’s past. The biased and inaccurate account of Reconstruction learned by my Tougaloo students, described in the introduction to this book, is merely a particularly flagrant example. So is the counterfactual “$24 myth” discussed in Chapter 7. Faced with material like this, students who do not hail from the establishment often develop a special alienation from history. They may not be able to articulate how their textbook’s presentation of the past implies that the nation is just, which in turn implies that their family’s lack of status must be its own fault. They only know they “don’t like” history.

  Don’t let this happen in your class! History can be a tool of liberation or oppression. Surely, learning the truth about the past is a tool for liberation. Teachers need to look within and ensure that their course and their thinking about the U.S. do not gloss over our misdeeds or imply that Americans did them with good intentions. Western civilization will not come to an end if our students face the past without flinching. On the contrary, our nation can only benefit from its newly informed citizenry.

  The second loose end concerns the rising number of “standardized” tests in history that students—and by implication their teachers—face. Under pressure from the federal No Child Left Behind Act (NCLB), sometimes derided as “No Child Left Untested,” many states now rely on multiple-choice exams to test students’ knowledge and skills in history. There is nothing wrong with an increased emphasis on accountability. Teachers should be accountable. The problem lies in how we measure student progress.

  Legislators and state administrators do not adopt multiple-choice tests out of a belief that such exams measure skills that children need. Everyone knows that after students leave school, most will never have to answer another multiple-choice item again. In the workplace, we may be asked to write, to speak, to read critically and summarize, to understand tables. In our job as citizens, we need to be able to do all those things and also listen critically, develop ideas, and persuade others of our point of view. In adult life, we are not asked to choose a letter:

  The War of 1812 began in

  a. 1811

  b. 1812

  c. 1815

  d. It never really began because Great Britain backed down

  e. All of the above41

  To state officials, multiple-choice tests come with a gloss of fairness. They seem defensible and uncontroversial because they look objective. They claim to test factual knowledge because they come with “right” or “wrong” answers.

  This chapter has shown what is not fair—and cannot be fair—about “standardized” tests. The claim that multiple-choice items are defensible and uncontroversial unravels when we consider the “right” answer to this fill-in-the-blank item: “Columbus discovered America in ____.” A further unraveling comes when we recall the problems with “twig tests” discussed in the previous chapter. Although multiple-choice items can be constructed that test subtle matters of understanding, such questions are rare. Most multiple-choice tests ask for mere recall. Teachers will teach to the tests. We even want them to. If the test is multiple choice, teachers will teach twigs, students will be bored, and we will return to square one, with U.S. history being the least-liked subject in the curriculum.

  Meanwhile, students who have done powerful work in a course in U.S. history for the first time—partly because they were expected to, and partly because some of the varied teaching methods resonated with their strengths—are unlikely to do well on these statewide exams. Multiple-choice twig tests are precisely the barriers that have been defeating these students for years.

  In the real world, twig tests get adopted because they are cheap. Indeed, developing and grading them can be cash cows for their manufacturers. Despite their economy, however, end-of-the-year exams should not be twig tests, state-mandated or not.

  Mind you, simply doing away with the test in history offers no solution. The only thing worse than a statewide twig test in history is no statewide test in history at all—because then principals and superintendents will concentrate their resources on those subjects that are tested. So my final suggestion about “standardized” tests in history is this: Teachers can influence their state’s test. They can be active. Teachers can persuade first their district and then their legislators that relying on multiple-choice items is false economy. Teachers can learn about states that refuse to trust such tests—Vermont and Rhode Island, for two.42 Instead, states can ask students to prove in various ways that they possess the historical knowledge and skills that will allow them to be creative citizens. These are the same skills that equip them to be effective in the workplace, after all, and state legislators and administrators want to produce capable workers. Twig tests don’t test useful skills. Who better to create a test that does than teachers who have given a lot of thought as to why and how U.S. history should be taught and learned? In short, who better than you?!

  FOCUSED BIBLIOGRAPHY

  Robert Rosenthal and Lenore Jacobson, “Teacher Expectations for the Disadvantaged,” Scientific American 218, no. 4 (April 1968), 21–25. This article sparked the now-vast literature on teacher expectations, so it is hardly the last word on it, but it still reads well and raises the key concerns.

  CHAPTER 3

  Historiography

  HISTORIOGRAPHY IS ONE OF THE GREAT GIFTS that history teachers can bestow upon their students. Unfortunately, many college history departments reserve this blessing for their history majors in upper-level courses. Every college student needs to learn the term. For that matter, every high school graduate needs to know it. (Don’t worry: I’ve taught the idea to children as young as 4th grade. If they can learn supercalifragilisticexpialidocious, they can handle “historiography.”) The term and the underlying concept are among our best tools for developing critical reading and analytic thinking.

  Most concisely, historiography means “the study of history,” but not just “studying history.” Historiography asks us to scrutinize how a given piece of history came to be. Who wrote this book? Who put up this marker? Who didn’t put it up? What points of view were omitted?

  When I was in high school, I believed history was to be memorized, like the times tables. After all, Columbus did discover America in 1492, didn’t he? Doesn’t 8 × 8 = 64? Historians know better. History is not like arithmetic. To be sure, there is a bedrock of fact in history. What happened in 1492 happened. But that is not history. History is what we say happened. What we say about 1492 changes as we change. Historiography is the study of why and how history changes.

  A TALE OF TWO ERAS

  On the landscape, every historic site is a tale of two eras: what it’s about and when it went up. The historical marker for the Almo massacre in southern Idaho offers an unusual exception to this rule. While researching Lies Across America: What Our Historic Sites Get Wrong, I read thousands of historical markers. Of them all, the one presented here is perhaps the most attractive—a piece of slate carved into the shape of the state of Idaho. It is also the most deceitful. It claims that on some day in 1861, Indians killed some 300 pioneers in their wagon train near present-day Almo. In reality, 300 immigrants did not perish. Thirty did not … three did not … it never happened at all.

  While this marker commemorates a massacre that never took place, it does reveal something about Idaho in 1938.

  It isn’t easy to prove a negative. I can write about what didn’t happen in Almo with such authority only because a fine Western historian, Brigham Madsen, spent much of four decades researching the topic. He makes a compelling case. First, he showed that the earliest mention of the massacre was in 1927—66 years after it allegedly took place! Other, much smaller incidents won extensive newspaper coverage at the time; but not Almo. Throughout the entire West between 1842 and 1859, of more than 400,000 pioneers crossing the Plains, fewer than 400, or less than 0.1%, were killed by American Indians.1 Two years later, almost as many more allegedly perished in one incident, yet no one noticed? Next, he found no mention of the event in the records of the Indian Service, War Department, and state and territorial bureaucracies. “A massacre involving the deaths of 294 emigrants would have engendered a massive amount of material,” he pointed out. “There is none.” Finally, Madsen found inconsistencies, even impossibilities, in the tale as told in 1927. To cite just one: supposedly the men of the party, inside their circled wagons, dug wells during the three- or four-day siege, trying in vain to reach water. Afterward, these wells proved deep enough to accommodate all 294 corpses!2

  We must conclude that this historical marker is not a tale of two eras. It has nothing to tell us about what it’s about, because in 1861 nothing much happened in southern Idaho.

  What can it tell us about 1938, when it went up?

  By the 1930s, the Almo anecdote resonated with the familiar American archetype of whooping Indians circling around the circled wagons. In reality, Indians rarely circled like that—doing so would merely have exposed them and their horses to danger. The tradition of circling Indians does not begin in the real West, but in 1883, in Buffalo Bill’s Wild West Show. There Indians had to circle because they were in a circus ring! Buffalo Bill became the biggest show business act in the world; by 1893 fifty imitators were touring the United States. Hollywood picked up the tradition, and the real West became the reel West: one-third of all Hollywood movies made before 1970 were Westerns! Today, as Western novelist Larry McMurtry put it, “Thanks largely to the movies, the lies about the West are more potent than the truths.”3

  According to Hollywood myth, made visible on the landscape in Almo, Native Americans were the foremost obstacle that pioneers faced. Actually, although Natives did defend their homes and lands against intruders, on the whole they proved to be more help than hindrance to westering white pioneers.4 All this was forgotten in the culture of white supremacy that arose during the first decades of the twentieth century. By 1938, the Sons and Daughters of Idaho Pioneers were happy to believe any story about savage Indians, with no basis beyond a writer’s memory of something “an old trapper” told him around 1875. Thus, the Almo marker does deserve to be on display, perhaps in the Idaho State Historical Society Museum. There it could be accompanied by a label, “Artifact of 1938,” that explains to visitors what the marker reveals about white culture in southern Idaho.

  As soon as students understand historiography, they get twice as much stimulation from every historic site they visit. I have watched as students poke around at the rear of a war memorial until they find the date when it went up. Now they can put it in context: This is what South Carolina said about Gettysburg in 1965. Simply adding the last two words hangs a context around whatever the monument says. No longer is it written in stone; now it is written in 1965. Intrinsic to that context is the possibility of a different context—1865, perhaps, or 2015. Thinking critically about the message cannot lag far behind.

  THE CIVIL RIGHTS MOVEMENT, COGNITIVE DISSONANCE, AND HISTORIOGRAPHY

  If every historic site is a tale of two eras, the same holds for every historic source. Different editions of the same textbook provide an inexpensive introduction to historiography. Treatments of John Brown offer a case in point. As white supremacy increasingly pervaded American culture during the Nadir of race relations in the early years of the twentieth century, textbook authors increasingly saw the abolitionist as insane. Indeed, they even lost sympathy for nonviolent abolitionists like William Lloyd Garrison, considering them somehow responsible for the Civil War. Not until the Civil Rights Movement of the 1960s was white America freed from enough of its racism to accept that a white person did not have to be crazy to die for black equality. In a sense, the murders of Michael Schwerner and Andrew Goodman in Mississippi, James Reeb and Viola Liuzzo in Alabama, and various other whites in various other southern states during the Civil Rights Movement enabled textbook writers to see sanity again in John Brown.

  John Brown’s sanity provides an inadvertent index of the level of white racism in our society, and students can trace Brown’s sanity level in textbooks published from 1870 to the present. Different editions of the same book offer a particularly effective demonstration of the importance of historiography. Rise of the American Nation, probably the best-selling textbook of the 1960s and 1970s, labeled Brown’s 1859 Harpers Ferry plan “a wild idea, certain to fail,” in 1961. The same textbook in 1986, now retitled Triumph of the American Nation, called it “a bold idea, but almost certain to fail.” How economical! The typesetter changed wi to bo and added “almost,” and students can see Brown becoming more sane, right before their eyes.

  I asked a class of 5th-graders what John Brown had done between 1961 and 1986 to cause this change in his treatment. “Nothin’,” they chorused. “Well, he moldered a little more!” volunteered one. Then I asked the class, “What changes took place in American society between 1961 and 1986 to cause this change?” Immediately a student replied, “The Civil Rights Movement.” Right before my eyes, they were doing historiography. They now understood that when one writes about John Brown helps to determine what one writes about him. When students return to their textbook, they realize they are reading what this book, published in its specific year, says.

  Why did the Civil Rights Movement cause such change in how historians wrote? Cognitive dissonance, an important concept developed by the social psychologist Leon Festinger, can explain the transformation. Cognitive dissonance has such important implications for history that every high school graduate should understand it. Briefly, Festinger teaches that when our ideas or ideals are out of harmony with our actions, there is a mental push to change our ideas. We cannot change what we have done, after all, and often we cannot easily change what we are doing.

  Two portraits of John Brown show him edging back toward sanity as the twentieth century drew to a close. First is Brown as John Steuart Curry saw him in 1937. This gaunt man is obviously disturbed; in Curry’s mural at the Kansas State Capitol, Brown’s outstretched hands drip blood. Like the tornado behind him, he spins out of control.

  Next is how the New York Times Book Review would have readers see Brown in 1998—as a religious fanatic. “He took orders only from God,” headlined the Times. In reality, Brown never claimed God dictated his plans. “I see a book kissed here which I suppose to be the Bible, or at least the New Testament, which teaches me that all things whatsoever I would that men should do to me, I should do even so to them,” said Brown in his last words to the court at his treason trial in 1859. “It teaches me, further, to remember them that are in bonds as bound with them. I endeavored to act up to that instruction.” That teaching is Jesus’s “Golden Rule.” It can indeed be found in the New Testament and does imply that slavery is un-Christian. Rather than present Brown’s religious motivation honestly, the Times parodies it. Even in 1998, more or less our time, neither its illustration nor its prose helps readers see Brown as he was.

  Festinger devised ingenious experiments to demonstrate cognitive dissonance. For instance, he recruited undergraduates to participate in a boring task involving repetitive hand motions. After an hour, when time was up, he asked each to tell the next student that it was interesting. Some subjects he paid $20. Others got only $1. The $20 recipients did not know that others had got less; the $1 participants never knew that others had got more; hence no invidious distinctions could be drawn. Finally, he asked each student a series of questions about the hourlong session: Did they find it boring or interesting? Did they think it was useful to science or probably useless?5 Take a moment and ask yourself: responding to the final questions, who were probably more positive about the experiment, the $20 recipients or those paid only $1?

  The answer surprises most audience members: the $1 folks rated the task more interesting than those paid $20. Why? Put yourself in the shoes of a $20 recipient. You have wasted an hour of your life doing a dumb experiment. You even lied—well, maybe only fibbed—to a fellow student about how it was actually interesting. But then, you did get $20 for it, not bad for an hour’s work. Now empathize with a $1 recipient. You spent an hour doing a dumb experiment. You even told a fellow student that it was actually interesting. For this you got exactly $1. You are a jerk! A patsy!

  Few of us like to say that about ourselves. We cannot go back and retrieve the hour we have wasted. We can modify our opinion about it. It wasn’t so bad. It was kind of interesting. It probably benefitted science. So we become more positive about the experience than those paid $20. Nor are we lying now. Our more positive opinions are our real opinions, just as real as those held by the $20 recipients. We did not first hold negative views, then change them overtly when asked at the end. But we did alter our opinions to bring them into harmony with what we did. Modifying one’s opinions to bring them into line with one’s actions or planned actions is the most common resolution of cognitive dissonance.

 
