The One World Schoolhouse: Education Reimagined


by Salman Khan


  Best of all, when students nailed ten problems in a row—a feat that generally seemed quite daunting at the start—they really felt that they’d accomplished something. Their confidence and self-esteem had been boosted, and they looked forward to the challenge of the next, more difficult concept.

  The Leap to a Real Classroom

  Let’s now jump to early 2007.

  As of then, several thousand students were using the Khan Academy videos that had recently begun to be posted on YouTube. Of those thousands, some hundreds were also availing themselves of the problem-generating capability of the site. Clearly, the Academy was growing beyond my handful of tutees; word of mouth was spreading and the exponential viral growth was not too far off in the future. This was gratifying, of course, but there was also something surreal about it. I was accustomed to having a personal relationship with everyone I tutored. Now, except for my cousins and family friends, I didn’t really know my students, except through their work and occasional emails; I felt a little bit like a doctor who analyzes lab results but doesn’t see patients.

  I hadn’t yet had the privilege and the challenge of interfacing with real-world classroom teachers and students. The problem-generating software and the rather basic feedback loop worked well enough for me; would they work for anybody else? What refinements or criticisms would be offered by the professionals who were actually in the trenches? Would teachers embrace the online video concept or feel threatened by it? Would the ideas I had been experimenting with be most productively used as a full curriculum or an add-on?

  Eager to see firsthand how students interacted with the software and the videos, I was excited when, through a friend, I was introduced to a teacher who was helping to run a summer program here in the Bay Area. The program is known as Peninsula Bridge, and its mandate is to provide educational opportunity to motivated middle-school kids from underresourced schools and neighborhoods; toward this end, a number of the Bay Area’s most prestigious private schools donate the use of their facilities. Once a student is accepted, he or she is invited, tuition-free, to attend a summer session.

  I was eager to participate, but first I had to convince the faculty and the board that I had something to offer. I have to admit that this “audition” made me nervous. This was odd. In my work at the hedge fund I routinely went, unfazed, into meetings with CEOs and CFOs of major corporations; I breezed through discussions regarding investment choices where tens of millions of dollars (and possibly my job) were at stake. Now I was walking into very informal meetings with like-minded and generous-spirited people, and I was as jumpy as a teenager on his very first date.

  My first conversation was with a woman named Ryanne Saddler, a history teacher and the summer site director at the Castilleja School, one of the institutions that lent its campus to Peninsula Bridge. I was so excited to have access to an actual member of the education establishment that I talked a mile a minute as I laid out the basics of what I had been working on—the videos, the self-paced exercises, the knowledge map, the feedback dashboard. Ryanne seemed to like what she heard, but since she herself was not a math teacher she suggested I do my dog-and-pony show in a meeting with the full board. I readily agreed, and as we were parting, Ryanne casually said, “This will all work on Macs, right?”

  “Of course!” I said confidently.

  This was a fib. I didn’t own a Mac and I had no idea if my software would run on one. I went straight to the local computer store, bought a MacBook, and then pulled an all-nighter, hacking my way through to make everything—well, mostly everything—compatible.

  If this was a somewhat shaky beginning to my relationship with real-world education, the omens soon got even worse. My meeting with the board was scheduled for March 15. By coincidence—or cruel fate—this was also the date on which my domain name, khanacademy.org, came up for renewal. Unbeknownst to me, the credit card I had left on file with the domain host had expired. And so as a gentle reminder that I owed them $12, the hosting company shut down the site. No warning, no grace period. On what was up to then the most important morning of the Academy’s young existence.

  The realization that the site was down had a strange effect on me; it made me very calm. Before that, I’d been a nervous wreck, wondering what gave me the gall to believe I could change the way education happens with my rather rustic, handcrafted videos and software. Now I realized that I had no chance. A guy comes to show off his website, except that he has no website. What a loser! Accepting defeat before I even started, I went into the meeting equipped with an old-fashioned slideshow and the videos that were on YouTube.

  At Ryanne’s suggestion, I showed a video that I had done on “Basic Addition,” which I felt was clumsily made, and possibly even cheesy—I still cringe when I hear my own voice. Luckily, everyone else seemed mildly entertained to hear a grown man count avocados while seeing shaky handwritten text appear on a virtual blackboard. They concluded that the Academy might be of real use in fulfilling their goal of getting kids ready to face algebra. They seemed as excited as I was to give it a try.

  It turned out that Peninsula Bridge used the video lessons and software at three of its campuses that summer. Some of the ground rules were clear. The Academy would be used in addition to, not in place of, a traditional math curriculum. The videos would only be used during “computer time,” a slot that was shared with learning other tools such as Adobe Photoshop and Illustrator. Even within this structure, however, there were some important decisions to be made; the decisions, in turn, transformed the Peninsula Bridge experience into a fascinating and in some ways surprising test case.

  The first decision concerned where in math the kids should start. The Academy math curriculum began, literally, with 1 + 1 = 2. But the campers were mainly sixth to eighth graders. True, most of them had serious gaps in their understanding of math and many were working below their grade level. Still, wouldn’t it be a bit insulting and a waste of time to start them with basic addition? I thought so, and so I proposed beginning at what would normally be considered fifth-grade material, in order to allow for some review. To my surprise, however, two of the three teachers who were actually implementing the plan said they preferred to start at the very beginning. Since the classes had been randomly chosen, we thereby ended up with a small but classic controlled experiment.

  The first assumption to be challenged was that middle-school students would find basic arithmetic far too easy. Among the groups that had started with 1 + 1, most of the kids, as expected, rocketed through the early concepts. But some didn’t. A few got stuck on things as fundamental as two-digit subtraction problems. Some had clearly never learned their multiplication tables. Others were lacking basic skills regarding fractions or division. I stress that these were motivated and intelligent kids. But for whatever reason, the Swiss cheese gaps in their learning had started creeping in at a distressingly early stage, and until those gaps were repaired they had little chance of mastering algebra and beyond.

  The good news, however, is that once identified, those gaps could be repaired, and that when the shaky foundation had been rebuilt, the kids were able to advance quite smoothly.

  This was in vivid and unexpected contrast to the group that had started at the fifth-grade level. Since they’d begun with such a big head start, I assumed that by the end of the six-week program they would be working on far more advanced concepts than the other group. In fact just the opposite happened. As in the classic story of the tortoise and the hare, the 1 + 1 group plodded and plodded and eventually passed them right by. Some of the students in the “head start” group, on the other hand, hit a wall and just couldn’t seem to progress. There were sixth- and seventh-grade concepts that they simply couldn’t seem to master, presumably because of gaps in earlier concepts. In comparing the performance of the two groups, the conclusion seemed abundantly clear: Nearly all the students needed some degree of remediation, and the time spent on finding and fixing the gaps turned out both to save time and deepen learning in the longer term.

  But how did we discover where the gaps were, how big a hurdle they posed, and when they’d been adequately filled in?

  As I’ve mentioned, I had already designed a pretty basic database that allowed me to keep track of the progress of my own tutees. But now I was working with experienced classroom teachers who pointed the way to significant refinements in the feedback system. A few days into the camp schedule, one of these teachers, Christine Hemiup, emailed me to say that while the existing functionality was spiffy, what she really needed was a simple way to identify when students were “stuck.”

  This, in turn, led to a meditation on the concept of “stuckness.” Learning, after all, always entails a degree of being “stuck,” if only for a moment, on the cusp between what one doesn’t know and what one has come to understand. So I realized that, as in the case of mastery, I would have to come up with a somewhat arbitrary heuristic for defining “stuckness.” I settled on this: If a student attempted fifty problems and at no point got ten in a row right, then he or she was “stuck.” (This heuristic has by now been refined using fairly advanced techniques, but the general idea is the same—figure out who could most use help from the teacher or another student.)
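The heuristic described above is simple enough to sketch in a few lines of code. The following is an illustrative reconstruction, not the Academy's actual implementation; the function name and input format are my own:

```python
def is_stuck(results, attempt_limit=50, streak_goal=10):
    """Stuckness heuristic as described in the text: a student who has
    attempted at least `attempt_limit` problems without ever getting
    `streak_goal` right in a row is flagged as stuck.

    `results` is a list of booleans (True = answered correctly),
    in the order the problems were attempted.
    """
    streak = 0
    for correct in results:
        streak = streak + 1 if correct else 0
        if streak >= streak_goal:
            return False  # hit the mastery streak at some point: not stuck
    # Never reached the streak; only "stuck" once enough problems were tried.
    return len(results) >= attempt_limit
```

A student with forty-nine misses is not yet flagged, but the fiftieth attempt without a ten-in-a-row streak would trip the heuristic.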

  This rough definition of “stuckness” served well enough as a framework, but still left the question of how best to get the information to the teacher. Christine suggested a daily spreadsheet with each student represented by a row, and each concept pictured as a column. At every intersection of student and concept there would be a “cell,” in which we could put information as to how many problems had been worked, the number right or wrong, the longest streak, and time spent. The spreadsheet would provide a simple and graphic account of who was stuck and where.
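The cell contents Christine asked for can be computed by a single pass over the raw attempt log. Here is a minimal sketch of that aggregation; the field names and function name are my own inventions, chosen only to mirror the description above:

```python
from collections import defaultdict

def build_dashboard(attempts):
    """Aggregate raw attempt records into a student-by-concept grid.

    `attempts` is a list of dicts with keys 'student', 'concept',
    'correct' (bool), and 'seconds' (time spent) -- an assumed format.
    Returns a dict mapping (student, concept) to a summary cell with
    problems worked, right/wrong counts, longest streak, and time spent.
    """
    cells = defaultdict(lambda: {"worked": 0, "right": 0, "wrong": 0,
                                 "longest_streak": 0, "streak": 0,
                                 "seconds": 0})
    for a in attempts:
        cell = cells[(a["student"], a["concept"])]
        cell["worked"] += 1
        cell["seconds"] += a["seconds"]
        if a["correct"]:
            cell["right"] += 1
            cell["streak"] += 1
            cell["longest_streak"] = max(cell["longest_streak"],
                                         cell["streak"])
        else:
            cell["wrong"] += 1
            cell["streak"] = 0  # a miss resets the running streak
    return dict(cells)
```

Writing each cell out with students as rows and concepts as columns gives exactly the daily spreadsheet Christine proposed.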

  As it turned out, the feedback spreadsheet was much more than a tidy graphic feature; it fundamentally altered the dynamic of the classroom. Once again, the use of technology made the classroom more human by facilitating one-on-one interactions and by letting the teacher know who needed her attention most. Even better, a student who had already mastered a particular concept could be paired with one who was struggling. Or two students, stuck in the same place, could work together to get past their common hurdle. In all of these instances, the clear emphasis would be on quality, helping interactions.

  Before leaving this account of the Peninsula Bridge experience, I’d like to mention one anecdotal outcome that I found particularly interesting and hopeful. In the traditional model of education we’ve inherited from the Prussians, students are moved together in cohorts. Because it appears that—in a traditional classroom—the spread between the fastest and slowest students grows over time, putting them all in one class cohort eventually makes it exceedingly difficult to avoid either completely boring the fast students or completely losing the slow ones. Most school systems address this by “tracking” students. This means putting the “fastest” students in “advanced” or “gifted” classes, the average students in “average” classes, and the slowest students into “remedial” classes. It seems logical… except for the fact that it creates a somewhat permanent intellectual and social division between students.

  The assessments that decide the fates of these students can also be somewhat arbitrary in their timing and in what they say about the potential of the student. So I was very curious to see if there were any data from the camp that showed that if “slow” students have the opportunity to work at their own pace and build a strong foundation, they could become “advanced” or “fast.” I did a database query for students who, at the start of the program, seriously lagged their peers—and would therefore have probably been tracked “slow” by placement exams—but who then turned out to be among the top performers.

  In one class of only thirty students, I found three who had started the six-week program significantly below average and finished it significantly above average. (For the statistically minded, I measured this by comparing the number of concepts mastered by each student against the average number completed by the group, during the first and last weeks of the program. I then focused on those who at the start of camp were at least one standard deviation below the norm, and by the end were at least one standard deviation above it.) In plain English, what this admittedly tiny sample suggested was that fully 10 percent of the kids might have been tracked as slow, and treated accordingly, when they were fully capable of doing very well in math.
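The query described in the parenthetical can be sketched directly. This is an illustrative reconstruction under the stated measure (concepts mastered versus the group mean, one standard deviation below in the first week and one above in the last); the function name and data layout are assumptions of mine:

```python
import statistics

def tortoise_candidates(week_first, week_last):
    """Find students at least one standard deviation below the group
    mean at the start and at least one standard deviation above it at
    the end, per the measure described in the text.

    `week_first` and `week_last` map student -> concepts mastered
    during that week (an assumed data layout).
    """
    def outliers(scores, side):
        mean = statistics.mean(scores.values())
        sd = statistics.pstdev(scores.values())  # population std dev
        if side == "below":
            return {s for s, v in scores.items() if v <= mean - sd}
        return {s for s, v in scores.items() if v >= mean + sd}

    # Students flagged "slow" at the start who finished near the top.
    return outliers(week_first, "below") & outliers(week_last, "above")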

  There was one seventh-grade girl—I’ll call her Marcela—whose results were especially striking. At the start of camp, Marcela was among the least advanced of the students, and during the first half of the summer session her progress was among the slowest; she was working through roughly half as many concepts as the average student. In particular, she was spending an inordinate amount of time wrestling with the concepts of adding and subtracting negative numbers; she was about as stuck as stuck could get. Then something clicked. I don’t know exactly how it happened, and neither did her classroom teacher; that’s part of the wonderful mystery of human intelligence. She had one of those Aha moments, and from then on she progressed faster than nearly anyone else in the class. At the end of the program, she was the second most advanced of all the students. Moreover, she was showing mathematical intuitions that hinted at a genuine gift; she ended up breezing through complex topics that most of her peers—even the ones who thought they were “good” at math—struggled with.

  At the close of camp, we held a little awards ceremony. I had the pleasure of presenting prizes to a few of the kids, Marcela among them. She was very shy and—until that summer—very short on confidence, and when I told her she’d become a rock star, she managed just the smallest smile and a quick nod. That was more than enough to make my day.

  Fun and Games

  In terms of my own learning curve in the realities of education, the Peninsula Bridge experience was both thrilling and liberating. When I was recording the video lessons to post on YouTube, remember, I was sitting all by myself in a glorified closet. Now I was dealing with flesh-and-blood kids whom I enjoyed and rooted for, and classroom teachers whose wisdom and commitment I greatly admired. My appetite for camps and classrooms had been whetted, and during the next couple of summers, starting in 2009, I codesigned and co-ran, with an aerospace engineer named Aragon Burlingham, what I thought of as an experiment in hands-on learning. As I still had my hedge fund job during the first of these summers, I used almost all my vacation time to be at the camp, and I didn’t mind at all. I was having a blast.

  As I hope is clear by now, it was never my vision that watching computer videos and working out problems should comprise a kid’s entire education. Quite the contrary. My hope was to make education more efficient, to help kids master basic concepts in fewer hours so that more time would be left for other kinds of learning. Learning by doing. Learning by having productive, mind-expanding fun. Call it stealth learning. Summer camp seemed a perfect testing ground for these other aspects of education.

  Our camps were therefore largely built with an emphasis on real projects that would in turn illustrate underlying principles. If that sounds a little dry and abstract, let me bring it home with a vivid example. Much of our time at the camps was spent in building robots. In one project, students were tasked with designing—using programmable Legos with sophisticated touch, light, and infrared sensors—tabletop sumo wrestlers. These robots needed to detect their opponent robot (or robots) and push them off the table. It was a simple game with open-ended opportunities for complexity.

  Some students built smart and nimble robots that tried to trick their opponents into driving themselves off the tables. Others optimized for traction or torque. Most importantly, the kids repeatedly built, tested, and refined their unique concepts.

  Another camp activity that proved fertile ground for learning was a variation on the familiar board game Risk. We played a variation called “Paranoia Risk” with the wrinkle that each player could only win by specifically eliminating one other randomly assigned player. You knew who you were supposed to destroy; you didn’t know who was trying to destroy you. Hence the paranoia. You had to infer the malice from the other players’ actions. Then you had to decide when it was better to simply pursue your own immediate interests, as opposed to playing defense vis-à-vis your predator or offense against your prey.

  While the six players were implicitly learning about psychology, game theory, and probability directly through the game, the other twenty students were trading on the outcome, and thereby understanding how information and emotion drive markets. Each of the nonplayers was given $500 in fake money and six pieces of colored construction paper—one for each color of the players on the gameboard—at the start of the game. The rule was that the paper representing an eliminated player would be worth zero, while the paper representing a winning competitor would be worth $100. So, as you would expect, the price of each player’s “stock” went up or down in accordance with the ebb and flow of the game; if someone was willing to pay $60 for the red paper, he was telling the market that he believed red had a 60 percent chance of winning (60 percent × $100 = $60). Without knowing it, students were gaining deep intuitions about probability, expected value, and modeling unpredictable phenomena. At one point, a few “securities” were trading above $100—more than they could possibly return. This was a great discussion point after the game when we talked about “irrational exuberance.”
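The pricing logic here is pure expected value: a paper pays $100 if its player wins and nothing otherwise, so its fair price is the win probability times the payout. A minimal sketch (the function names are mine, not from the camp materials):

```python
def fair_price(win_probability, payout=100):
    """Expected value of a paper that pays `payout` if its player wins
    and zero otherwise: probability times payout. The $60 red paper in
    the text corresponds to an implied 60 percent chance of winning."""
    return win_probability * payout

def implied_probability(price, payout=100):
    """Reading the market backwards: the win probability implied by
    the price a trader is willing to pay."""
    return price / payout
```

A price above $100 implies a "probability" greater than 1, which is exactly why the overpriced securities made such a good post-game lesson in irrational exuberance.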

 
