The One World Schoolhouse: Education Reimagined


by Salman Khan

There is a certain irony here. I entered teaching as the tutor of a twelve-year-old girl. To be perfectly honest, adult education was an afterthought. In fact, I’ll go further. As I muddled along in my tinkering and pragmatic way, without assumptions or theory, I really didn’t consider lifelong learning at all. Yet it turns out that what I was trying to accomplish with the kids was to foster an atmosphere and an attitude that came closer to that of adult learners. I inadvertently bumped into an idea that Knowles had already explored: Maybe andragogy—self-directed learning with the teacher as guide rather than director—is more appropriate for everyone.

  PART 4

  The One World Schoolhouse

  Embracing Uncertainty

  Here is a remarkable thought: Among the world’s children starting grade school this year, 65 percent will end up doing jobs that haven’t even been invented yet.

  This projection, while impossible to prove, comes from a highly respected and responsible source, Cathy N. Davidson, a Duke University professor who is also the codirector of the MacArthur Foundation Digital Media and Learning Competitions.1 And after all, once we get past the sheer shock of that number, the projection seems entirely plausible. Grade school students in the 1960s had no way of foreseeing that the hot spot in job creation and economic growth during the 1970s and ’80s would come from various aspects of the personal computing industry—an industry that didn’t exist in the Age of Woodstock. As recently as the 1980s, no one planned to make his or her living through the Internet, since the Internet existed nowhere but in the hushed and secret corridors of DARPA. Even more recently, how many kids, teachers, or parents realized that little Sally might end up working in advanced genomics, while Johnny became an entrepreneur in social media, Tabitha became an engineer in cloud computing, and Pedro designed apps for iPhones?

  None of these developments was foreseeable ten or fifteen years before the fact, and given the tendency of change to feed on itself and keep accelerating, it’s a safe bet that a decade from today there will be even more surprises. No one is smart enough to know what will happen tomorrow—or, for that matter, in the next hour, minute, or nanosecond—let alone half a generation down the line.

  The certainty of change, coupled with the complete uncertainty as to the precise nature of the change, has profound and complex implications for our approach to education. For me, though, the most basic takeaway is crystal clear: Since we can’t predict exactly what today’s young people will need to know in ten or twenty years, what we teach them is less important than how they learn to teach themselves.

  Sure, kids need to have a grounding in basic math and science; they need to understand how language works so they can communicate effectively and with nuance; they should have some awareness of history and politics so as to feel at home in the world, and some conversance with art in order to appreciate the human thirst for the sublime. Beyond these fundamentals, however, the crucial task of education is to teach kids how to learn. To lead them to want to learn. To nurture curiosity, to encourage wonder, and to instill confidence so that later on they’ll have the tools for finding answers to the many questions we don’t yet know how to ask.

  In these regards, conventional education, with its emphasis on rote memorization, artificially sequestered concepts, and one-size-fits-all curricula geared too narrowly toward testing, is clearly failing us. At a time when unprecedented change demands unprecedented flexibility, conventional education continues to be brittle. As our increasingly interconnected world cries out for more minds, more innovators, more of a spirit of inclusion, conventional education continues to discourage and exclude. At a time of stubborn and worldwide economic difficulties, the conventional educational establishment seems oddly blind (or tragically resistant) to readily available technology-based solutions for making education not only better but more affordable, accessible to far more people in far more places.

  In the pages that follow, I would like to propose a different sort of future for education—a more inclusive and more creative future. My vision may strike some people as a peculiar mix of ideas, because some of what I’m suggesting is quite new and some of it is very old; some of it is based on technology that has only recently come into being, and some of it harkens back to bygone wisdom about how kids actually learn and grow. Yes, I am a firm believer in the transformative power of computers and the Internet. Paradoxically, though, I am urging us forward, in part, by suggesting a return to certain older models and methods that have been cast aside in the name of “progress.”

  My Background as a Student

  When I was in tenth grade, I had an experience that proved pivotal not only for my own schooling but for the development of my entire philosophy of education. At a regional math competition in Louisiana, I first met Shantanu Sinha—the same Shantanu who is now president of the Academy. He was an acknowledged math jock, and he quickly showed me my place in the world when he beat me in the finals of the competition. But there was something else about Shantanu that impressed me even more than his sheer prowess. Chatting during the contest, he told me that as a tenth grader he was already studying pre-calculus. I myself was still taking Algebra II, although the subject had ceased to be stimulating. My understanding was that I had to stay in Algebra II, because that’s what tenth graders were taught, and there was nothing to discuss. Shantanu told me that he’d tested out of algebra and had therefore been allowed to advance.

  Testing out. What a concept. I’d had no idea that such a thing existed, though even a moment’s thought suggested that it made perfect sense. If a student could demonstrate proficiency with a certain set of ideas and processes, why not let him or her move on to more advanced ones?

  Back at my own school, full of enthusiasm, full of hope, I approached the powers that be with the possibility of testing out of my math class. My suggestion was instantly shot down by way of a dreary and all too familiar argument: If we let you do it, we’d have to let everybody do it.

  Since I was as self-involved as most people at that age, I had no interest in what other kids did or didn’t get to do; I only cared that I myself had been denied, so I sulked and misbehaved (although I did have the therapeutic release of being the lead singer in a heavy metal band). Over time, however, a broader and rather subversive question started scratching at my mind; eventually it became one of my most basic educational beliefs: If kids can advance at their own pace, and if they’d be happier and more productive that way, why not let everybody do it?

  Where was the harm? Wouldn’t kids learn more, wouldn’t their curiosity and imagination be better nourished, if they were allowed to follow their instincts and take on new challenges as they were able? If a student graduated early, wouldn’t this free up scarce resources for the students who needed them? True, this approach would call for more flexibility and closer attention to students as individual learners. To be sure, there were technical and logistical hurdles to be cleared; there were long-standing and brittle habits that would need to be altered. But whom was education supposed to serve, after all? Was the main idea to keep school boards and vice principals in their comfort zone, or was the main idea to help students grow as thinking people?

  Looking back, I think that in some odd and embryonic way, it was that stupid and infuriating statement—If we let you do it, we’d have to let everybody do it—that cemented my commitment to self-paced learning and started me on the path of trying to make self-paced learning a possibility for everyone.

  Eventually I was able to take the math classes I wanted—but only by working around and in some sense defying the system that was in place. I started taking summer courses at a local college. My high school then “allowed” me to take basic calculus, the only calculus course they offered. I got hold of a more advanced textbook and studied on my own. My senior year I spent more time at the University of New Orleans than at my own high school.

  I was fortunate to come from a family and a community that placed a very high priority on education; my mother supported and abetted my efforts to work around the system. But what about the kids whose parents didn’t care as much or were afraid to rock the boat or simply didn’t know how to help? What became of their potential, of the intellectual curiosity that had been systematically drained out of them?

  If high school persuaded me of the crucial importance of independent study and self-paced learning, it took college to convince me of the incredible inefficiency, irrelevance, and even inhumanity of the standard broadcast lecture.

  When I first arrived at MIT, I was frankly intimidated by the brainpower around me. Among my freshman cohort were kids who had represented the United States or Russia in the Math Olympiad. My very first physics lab was taught by a professor who’d won a Nobel Prize for experimentally verifying the existence of the quark. Everyone seemed smarter than I was, and aside from that it was cold! I’d never seen snow before or felt anything quite as chilly as the wind off the Charles River. Fortunately, there were a few other Louisiana kids around; one of them was Shantanu, who now went from being a high school acquaintance to a good friend and college roommate.

  As we settled into the MIT routine, Shantanu and I began independently to arrive at the same subversive but increasingly obvious conclusion: The giant lecture classes were a monumental waste of time. Three hundred students crammed into a stifling lecture hall; one professor droning through a talk he knew by heart and had delivered a hundred times before. The sixty-minute talks were bad enough; the ninety-minute talks were torture. What was the point? Was this education or an endurance contest? Was anybody actually learning anything? Why did students show up at all? Shantanu and I came up with two basic theories about this. Kids went to the lectures either because their parents were paying x number of dollars per lecture, or because many of the lecturers were academic celebrities, so there was an element of show business involved.

  Be that as it may, we couldn’t help noticing that many of the students who religiously attended every lecture were the same ones most desperately cramming the night before the exam. Why? The reason, it seemed to me, was that until the cramming phase they’d approached the subject matter only passively. They’d dutifully sat in class and let concepts wash over them; they’d expected to learn by osmosis, but it hadn’t quite worked out because they’d never really engaged. To be clear, I don’t blame my fellow students for finding themselves in this situation; as good and diligent pupils, they’d put their faith in what was, after all, the prescribed approach. Unfortunately, as we’ve seen in our discussion of attention spans and active versus passive learning, the prescribed approach was completely out of synch with the realities of human capability.

  Shantanu and I soon found ourselves a part of a small but visible and slightly notorious MIT subculture—the class skippers. I don’t recommend this for everyone, but it worked for us. To be sure, skipping class can easily become an excuse for, or a symptom of, simply goofing off. To us, honestly, it seemed like a more productive and responsible use of our time. Would we learn more sitting passively in a lecture for an hour and a half, or engaging actively with a textbook—or with online videos and interactive assessments, if only they’d been available at the time? Would we be more enriched by watching a professor’s presentation, or by deriving equations and writing software ourselves? Even as freshmen, we concluded that our class-skipping approach was working; we didn’t need to cram at the end of a semester and we didn’t stress about solving problems on a test, because that’s what we’d been doing all along.

  We soon became acquainted with some upperclassmen who were taking eight or nine courses a term (about double the typical MIT student’s already rigorous course load), and who challenged us to take extra courses as well. Without doubt, these guys were bright, but not freakishly so; their argument, in fact, was that any of us—not just at MIT but at every high school and university—should be able to handle twice as many courses if we avoided the seat time and simply pursued whatever actually helped us learn. There was no hocus-pocus here, no miracle shortcuts to academic success. It took discipline and work, quite a lot of each. But the idea was to work effectively, naturally, and independently.

  I want to pause a moment to consider this somewhat radical thought, which dovetailed perfectly with my own beliefs and in turn helped shape my eventual approach to teaching and learning. Could people actually learn twice as much as was generally expected of them? It seemed ambitious… but why not? As we saw in the discussion of the Prussian roots of our school system, the original aim of educators was not necessarily to produce the smartest students possible, but to turn out tractable and standardized citizens, workers who knew enough. To this end, attention was given not to what students could learn, but to the bare minimum of what they had to learn.

  Now, I am not imputing such Machiavellian motives to contemporary educators; but I am suggesting that some of the habits and assumptions that have come down from the eighteenth-century model still steer and limit what students learn. Conventional curricula don’t only tell students where to start; they tell students where to stop. A series of lessons ends; that subject is over. Why aren’t students encouraged to go farther and deeper—to learn twice as much? Probably for the same reason we consider 70 percent a passing grade. Our standards are too low. We’re so squeamish and embarrassed about the very notion of “failure” that we end up diluting and devaluing the idea of success. We limit what students believe they can do by selling short what we expect them to do.

  Coming back to MIT, Shantanu and I did take on something close to double course loads, and we both graduated with high GPAs and multiple degrees. And it wasn’t because we were any smarter or harder-working than our peers. It was because we didn’t waste time sitting passively in class. Understand, this is not a knock against MIT itself, which I thought was a magical place full of dazzlingly creative people doing amazing things. Further, MIT was very forward-thinking in letting students take as many courses as they wanted. My criticism is not of the institution but of the tired old habit of the passive lecture.

  Replace that with active learning, and I believe that most and very possibly all of us are capable of taking in much more than is currently expected of us. We can go much farther, and get there far more efficiently, with self-paced study, mentoring, and hands-on experiences. We can reach more ambitious goals if we are given the latitude to set those goals for ourselves.

  The Spirit of the One Room Schoolhouse

  Most educated people today attended school with children their own age and then remained with this same age-determined cohort throughout their elementary and secondary education, and even onward through college and graduate school. This basic model—grouping kids by birth date and then advancing them together grade by grade—is such a fundamental aspect of conventional education that people seldom seem to think about it. But we should, because its implications are huge.

  First of all, let’s remember that this age-group pattern did not always exist; like everything else about our educational habits, it is a human construct and a response to certain conditions in certain places and times. Before the Industrial Revolution, it was very much the exception to lump schoolkids together by age; it just wasn’t practical, given that most people lived on farms and the population was spread very thin. With industrialization came urbanization, and the new population density set the stage for multiroom schools. Kids needed to be divided up somehow, and forming classes by age seemed a logical choice. But there was a whole raft of implications that went along with sequestering kids by age, and these have turned out to be a very mixed blessing.

  Not to pick on the Prussians again, but as we’ve seen, the Prussian model is largely based on dividing human knowledge into artificially constrained chunks. Massive and flowing areas of human thought are diced up into stand-alone “subjects.” The school day is rigorously divided into “periods,” such that when the bell rings, discussion and exploration are lopped off. The strict grouping of students by age provides yet another axis along which education could be sliced up, compartmentalized, and therefore controlled.

  Arguably, this separation by age is the most powerful division of all, because it has allowed for the development of set curricula and ultimately arbitrary but consensual standards of what kids should learn at a given grade level. Expectations move in lockstep, as though all eight- or ten- or twelve-year-olds were interchangeable. Once kids were grouped by age, targets seemed clear and testing was straightforward. It all seemed quite scientific and advanced, and it proved very convenient for administrators. But little or no attention was paid to what was lost along the way.

  To state what should be obvious, there is nothing natural about segregating kids by age. That isn’t how families work; it isn’t what the world looks like; and it runs counter to the way that kids have learned and socialized for most of human history. Even the Mickey Mouse Club included kids of different ages, and as anyone who’s ever spent time around children can tell you, both younger and older kids benefit when different ages mix. The older ones take responsibility for the younger ones. (I see this even between my three-year-old and my one-year-old—and, trust me, it’s a remarkable thing to behold.) The younger ones look up to and emulate the older ones. Everyone seems to act more mature. Both younger and older rise to the occasion.

  Take away this mix of ages and everybody loses something. Younger kids lose heroes and idols and mentors. Perhaps even more damagingly, older kids are deprived of a chance to be leaders, to exercise responsibility, and are thereby infantilized.

  Let’s consider this a moment. Of late there has been much hand-wringing about the state of mind of contemporary teenagers—a seemingly widespread malaise found everywhere from New York to Berlin to Bahrain, and whose symptoms run the gamut from mere slackerism all the way to suicide. I would suggest that at least a significant part of the problem is our failure to entrust adolescents with real responsibility. Yes, we stress them out with demands and competition… but only with regard to themselves. We deny them the chance to mentor or help others, and we thereby conspire in their isolation and self-involvement. Biologically, kids start becoming grown-ups around the age of twelve. That’s when they can reproduce, and while I’m certainly not advocating teenage parenthood, I do believe that nature would not have made it possible unless adolescents were also wired to be ready to take responsibility for others. High school kids are burgeoning adults, but by narrowly restricting them to the companionship of their peers, responsible for no one but themselves, we treat them as children—and children they tend to remain.

 
