by Salman Khan
This suggests the seemingly obvious how of fixing gaps and lapses: Go back and revisit until the concept makes more sense; better still, try to actively apply the concept in a new context. Since neuroscience confirms our intuitive understanding that things are more easily learned the second time through, the review should not be onerous. Further, since repetition is an essential part of learning—a physical part of learning, in the creation and strengthening of neural pathways—this revisiting of a subject should result in a deeper and more durable grasp of it.
That part is simple. The harder question is this: Who is going to take the initiative and responsibility for seeing gaps and conducting reviews of past material to correct them?
In a standard classroom setting, it’s very unlikely that the teacher will be able to identify every learning gap of each individual student. And even if he could, he would not be able to lead customized reviews on a case-by-case basis. There simply isn’t enough classroom time for that, especially if the bulk of it is devoted to lecturing. Besides, the next unit is already looming. The class must move on.
By default, then, the ultimate responsibility for reviewing past lessons falls to the student. But will she follow through on this responsibility? Traditional classroom models make this difficult. The whole thrust of her education has taught her to be passive—to sit still, absorb, and eventually parrot back. Now she’s being asked to be thoroughly proactive, to diagnose her own difficulties and actively see to their resolution. That’s asking an awful lot of a student who has been trained to do the opposite.
Even if she can muster the clarity and the will to undertake an independent review of a troublesome subject, will she have access to the materials she needs? What if that material was in last year’s textbook, now given back or discarded? What if she has some idea of what to look for but can’t remember where to look? Clearly, there are difficulties here, and the difficulties work against the goal of helping students claim ownership of their own educations.
In principle, there is a fairly simple fix for this. The fix consists of two related strands.
First, students should be encouraged, at every stage of the learning process, to adopt an active stance toward their education. They shouldn’t just take things in; they should figure things out. This is an extremely valuable habit to inculcate, since in the modern world of work no one tells you what formula to plug in; success lies in the ability to solve problems in novel and creative ways. Besides, if you think about it, asking kids to be active is nothing more than asking them to be their natural selves. Is it natural for kids to sit quietly for an hour, listening? No, it’s natural for kids to want to do something, to be busy with work or play, to interact. Students are not naturally passive. Perversely, they need to be taught to be passive; the passivity then becomes a habit that makes them more tractable, perhaps, but less alert, less engaged in what they’re doing. This trade-off may be helpful for maintaining order in a jam-packed conventional classroom, but that doesn’t mean it’s the best way for students to learn.
Active learning, owned learning, also begins with giving each student the freedom to determine where and when the learning will occur. This is the beauty of the Internet and the personal computer. If someone wants to study the quadratic equation on his back porch at 3 a.m., he can. If someone thinks best in a coffee shop or on the sideline of a soccer field, no problem. Haven’t we all come across kids who seem bright and alert except when they’re in class? Isn’t it clear that there are morning people and night people? The radical portability of Internet-based education allows students to learn in accordance with their own personal rhythms, and therefore most efficiently.
Corollary to this is the idea of self-paced learning, which gives the individual student control over tempo, as well as over where and when. The same person will learn at different rates on different days or when dealing with different subjects. But in a conventional classroom there is a single tempo tapped out by a single person—the teacher. Bound to this lockstep beat, the students who catch on quickest will soon become bored and zone out; perversely, they may even become discipline problems just as a way of keeping occupied. The students who need the most time will still be left behind. The tempo will be perfectly suited only for some hypothetical student in the middle of the curve. It’s a case of one-size-fits-few.
With self-paced learning, by contrast, the tempo is right for every student because it is set by every student. If a given concept is easily grasped, one can sprint ahead, outrunning boredom. If a subject is proving difficult, it’s possible to hit the pause button, or to go back and do more problems as necessary, without embarrassment and without asking the whole class to slow down.
Portability and self-pacing, then, are essential aids to active, self-motivated learning. For a student to truly take ownership of his education, however, there’s another resource that’s required: easy and ongoing access to the lessons that have come before. This is where Internet-based learning offers a huge advantage over textbooks and other conventional materials. The lessons never disappear. Figuratively speaking, the blackboard is never erased, the books are never thrown away or given back. Students are encouraged to review because they can be confident that they will find what they are looking for, right there on their own computers. Even better, if the software knows when the student last visited a topic, it can prompt a review on its own. This is analogous to your eleventh-grade biology teacher walking up to you in the hallway when you’re in twelfth grade and asking you to explain photosynthesis.
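To make that one idea concrete, here is a minimal sketch of how software that records when a student last visited a topic might decide which topics to bring back for review. This is an illustration of the principle only, not a description of any actual product; the topic names, dates, and the thirty-day interval are assumptions chosen for the example.

```python
from datetime import date, timedelta

# Illustrative assumption: surface any topic not visited in the last 30 days.
REVIEW_INTERVAL = timedelta(days=30)

# Hypothetical record of when each topic was last visited.
last_visited = {
    "photosynthesis": date(2024, 1, 10),
    "quadratic equation": date(2024, 3, 2),
}

def topics_due_for_review(today: date) -> list[str]:
    """Return the topics whose last visit is older than the review interval."""
    return [
        topic
        for topic, visited in last_visited.items()
        if today - visited >= REVIEW_INTERVAL
    ]

print(topics_due_for_review(date(2024, 3, 20)))  # prints ['photosynthesis']
```

The design choice is the same one the paragraph describes: because the record of past visits never disappears, the system, rather than the student, can take the initiative in scheduling the review.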
Moreover, Internet-based learning has advantages not only for reviewing particular lessons, but for forging a deeper and more durable understanding of the associations between lessons. On the Internet we are not constrained by classroom walls, bells that dictate when a class is over, or state-mandated curricula. A topic can be covered in multiple ways, through many different lenses, across many superficially different subject areas.
This kind of learning fosters not only a deeper level of knowledge, but excitement and a sense of wonder as well. Nurturing this sense of wonder should be education’s highest goal; failing to nurture it is the central tragedy of our current system.
PART 2
The Broken Model
Questioning Customs
Ignorance and a narrow education lay the foundation of vice, and imitation and custom rear it up.
—MARY ASTELL
The despotism of custom is everywhere the standing hindrance to human advancement.
—JOHN STUART MILL
Normal is what you’re used to.
It seems to be a part of human nature that customs and institutions come to seem somehow inevitable and preordained. This sense, even when it is illusory, gives a stubborn staying power to habits and systems that have been around a while—even after it’s become clear that they’re no longer working very well. This is certainly the case with the educational system that most of us have known. It’s so big that it’s hard to see around it. It’s so complexly integrated with other aspects of our culture that it’s daunting to imagine a world without it.
If we are to muster the vision and the will to meaningfully change education—to bring teaching and learning into closer alignment with the contemporary world as it really is—one of the leaps we need to make is to understand that the currently dominant educational model was not, in fact, inevitable. It is a human construct. It evolved along a certain pathway; other pathways were also possible. Parts of the system we now hold sacred—for example, the length of the class period or the number of years assigned to “elementary” or “high” school—are in fact rather arbitrary, even accidental. Things that are now considered orthodox were at various points regarded as controversial and radical.
Still, changing a system of such vast inertia and long tenure is clearly not easy. It’s not just that tradition tends to cramp imagination; it’s also that our educational system is intertwined with many other customs and institutions. Changing education would therefore lead to changes in other aspects of our society as well. It is my firm belief that over time this would be a very good thing; in the near term, however, such a prospect necessarily suggests disruptions and anxieties.
Let me offer an analogy that I hope will drive home the enormity of the challenge that we face. Consider something as basic as the habit of eating three meals a day.
Is there some biological imperative dictating that we should eat breakfast, lunch, and dinner versus two or four or five meals? Some Buddhist monks eat one meal a day at midday. There is some recent evidence that suggests alternate-day fasting might also be a healthy option.1
Why, then, do most of us cling to the habit of breakfast, lunch, and dinner, even though most of us today do much less manual labor than our ancestors who started this custom? The answer is simply this: It’s what we’ve always done, just as we’ve always sent our kids to certain kinds of schools that operate in certain kinds of ways. It’s a cultural habit that we take for granted.
Moreover, since we are social creatures and since our interwoven lives consist of many interconnected facets, the custom of three meals a day has become part of a matrix of many other activities. The workday allows for a lunch hour. Local economies depend on restaurants serving dinner, employing staff, collecting sales tax, and so forth. Insofar as families still sit down together, it is shared mealtimes that most often bring them together.
For all these reasons, it would be exceedingly difficult to change the culture of breakfast, lunch, and dinner. The implications of such a change would be seismic. The whole rhythm of the workplace world would be altered. Entire industries would be challenged to adjust. Even the television schedule would need to shift.
As with our eating habits, so with our teaching habits.
Entire industries and some of our very largest professions depend on the persistence of our current system. Other social institutions—like giant publishers and test-prep companies—are synched to its workings. A certain teaching method implies certain goals and certain tests. The tests, in turn, have a serious impact on hiring practices and career advancement. Human nature being what it is, those who prosper under a given system tend to become supporters of that system. Thus the powerful tend to have a bias toward the status quo; our educational customs tend to perpetuate themselves, and because they interconnect with so many other aspects of our culture, they are extraordinarily difficult to change.
Difficult, but not impossible. What’s needed, in my view, is a perspective that allows us a fresh look at our most basic assumptions about teaching and learning, a perspective that takes nothing for granted and focuses on the simple but crucial questions of what works, what doesn’t work, and why. To gain that perspective, it’s helpful to look at the basics of our standard Western classroom model, to blow the dust off and to remind ourselves how the system came to be the way it is. It’s also useful—and humbling—to realize that the debates and controversies currently surrounding education tend not to be new arguments at all; similar conflicts have been raging among people of passion and goodwill since teaching and learning began.
The basics of the standard educational model are remarkably stubborn and uniform: Go to a school building at seven or eight in the morning; sit through a succession of class periods of forty to sixty minutes, in which the teachers mainly talk and the students mainly listen; build in some time for lunch and physical exercise; go home to do homework. In the standard curriculum, vast and beautiful areas of human thought are artificially chopped into manageable chunks called “subjects.” Concepts that should flow into one another like ocean currents are dammed up into “units.” Students are “tracked” in a manner that creepily recalls Aldous Huxley’s Brave New World and completely ignores the wonderful variety and nuance that distinguish human intelligence, imagination, and talent.
Such is the basic model—schematically simple in ways that mask or even deny the endless complexities of teaching and learning. For all its flaws, however, the standard model has one huge advantage over all other possible education methods: It’s there. It’s in place. It has tenure. The tendency is to believe that it has to be there.
Yet even the briefest survey of the history of education reveals that there is nothing inevitable or preordained about our currently dominant classroom model. Like every other system put in place by human beings, education is an invention, a work in progress. It has reflected, at various periods, the political, economic, and technological realities of its times, as well as the braking power of vested interests. In short, education has evolved, though not always in a timely manner, or before some unfortunate cohort of young people—a decade’s worth? a generation’s worth?—has been subjected to obsolete teachings that failed to prepare them for productive and successful futures.
It is time—past time—for education to evolve again. But if we hope to gain a clearer idea of where we need to go, it’s useful to have at least a rudimentary awareness of where we’ve been.
Let’s begin at the beginning. How did teaching start?
As it was succinctly put in a recent article by an educator named Erin Murphy in the Wharton School’s online journal, the Beacon, the earliest forms of teaching and learning were essentially a case of “monkey see, monkey do.” In preliterate hunter-gatherer societies, parents taught their children the basic survival skills by practicing them themselves and, whenever possible, inserting an element of play into the process. This form of teaching was simply an extension of the way other animals also taught their young. Lion cubs, for example, learn to hunt by mimicking the stalking postures and strategies of their parents, and turning the exercise into a game. In the case of both lions and early humans, the stakes in education were of the highest order. The offspring who learned their lessons well went on to prosper and reproduce. In the unforgiving environment of the savanna, the kids who didn’t pay attention or never quite caught on were not around very long. To flunk was to perish.
As human language developed—language itself being a technology that radically changed and expanded our ways of sharing information—societies grew more complex and more specialized, and there came to be areas of desirable skills and knowledge that were beyond the abilities of parents alone to teach. This gave rise, at various times and in various forms, to the apprentice system. Significantly, the apprentice system marked the first time in human history that the main responsibility for education was shifted away from the family; this, of course, gave rise to a debate that has never yet died down about the respective roles of parents versus outside authorities in the education of children. Absent the bonds of family affection, the apprentice system was also the first time there was a clear, hierarchical distinction between the master/teacher and the apprentice/student. The master taught and ruled; the student submitted and learned.
Still, the manner of learning was a long way from the passive absorption of the more recent classroom model. Apprenticeship was based on active learning—learning by doing. The apprentice observed and mimicked the techniques and strategies of the master; in this regard, the apprentice system was a logical extension of learning by imitating a parent.
The apprentice system was also the world’s first version of vocational school. It was a place to learn a trade—though in certain instances the trade in question could be extremely highbrow. Many associate the apprentice system with artisans like blacksmiths or carpenters, but it has also historically been the primary mode of education for future scholars and artists. In fact, even today’s doctoral programs are really apprenticeships where a junior researcher (the PhD candidate) learns by doing research under and alongside a professor. Medical residency programs are also really apprenticeships.
Be that as it may, the apprentice system in general represented one side of a schism—those who believe that education should, above all, be practical, aimed at giving students the skills and information they need to make a living—that has existed for thousands of years, and exists still. On the other side are those who feel that seeking knowledge is an ennobling process worth pursuing for its own sake.
The preeminent representatives of this latter point of view were of course the Athenian Greeks of classical times. Plato, in the dialogue Gorgias, ascribes to Socrates, his alter ego and ideal man, the following statement: “Renouncing the honors at which the world aims, I desire only to know the truth.” Clearly, there’s a feisty and even defiant value judgment being made here, a slap at mere practicality. Aristotle, in the very first line of his Metaphysics, asserts that “all men naturally desire knowledge.” He doesn’t say marketable skills. He doesn’t say the right credentials to get a job. He’s talking about learning for the sake of learning, and he’s positing that impulse as the very definition of what it means to be human. This is a long way from the model of apprenticeship as a way of learning to tan hides or carve stones or even treat patients.
There is much that is appealing in Plato’s and Aristotle’s pure approach to learning as a deep search for truth; this is, in fact, the mind-set that I hope to bring to students through my videos. However, there are a couple of serious problems with the model of the classic Greek academy. The first is that it was elitist—far more so than even today’s most exclusive prep schools. The young men who could afford to hang around discussing the good and the true were oligarchs. Their families owned slaves. None of these students really needed to care about how to harvest crops or weave textiles. Real work, even work that was intellectual, was beneath them.
This led to a second, more destructive problem that still exists today. Once the pure search for truth was posited as the highest good, it followed that anything merely useful would be regarded as less good. Practical learning—learning that might actually help one do a job—was regarded as somehow dirty. And this prejudice pertained even to practical subjects—as, for example, finance or statistics—that are intellectually very rich and challenging.