Relentless Pursuit
The new recruiting campaign worked. In 2000, the number of Teach For America applicants was 4,100. Five years later, 17,000 people applied. Many factors accounted for the fourfold jump. But there was little doubt that a more sophisticated and focused recruitment and marketing strategy had successfully changed the way TFA was perceived by a new generation.
TFA remained determined to get better at what it did. The organization began to think about how to identify its best teachers, and what it was that distinguished them from their peers. Obviously, TFA believed that a teacher’s value in the classroom was not based on the number of years of service or academic certification; it was determined solely by student outcomes. Kopp’s conviction that the teachers getting the best student results were also great leaders became the centerpiece of TFA’s approach to selection and teacher training.
TFA moved to both define and measure success. Endless hours of study and debate went into the process. In the end, it was decided that because lower-performing students were so far behind their wealthier counterparts, “significant gains” in academic performance would be required for them to catch up. At the elementary school level, significant gains were defined as a class average jump of at least one and a half grade levels in both math and literacy, or two grade levels in either math or literacy. For secondary content areas, significant gains would mean a class average of at least 80 percent mastery of the subject.
The definition of success in the classroom had a powerful impact across the entire organization. Previously, it was left largely to corps members to decide how best to motivate and move their students. CMs naturally poured energy into those areas that interested them most. Some concentrated on community-service projects, others on mentoring or after-school activities. With the introduction of “significant gains,” the collective energy of the entire organization was directed at the same single goal: specific academic gains in student achievement. Within a very short time, worthy but off-the-mark CM activities like painting over graffiti gave way to very purposeful, relentless work aimed solely at lifting academic achievement.
The TFA summer institute was reimagined. Before the creation of the Teaching as Leadership (TAL) framework, the training text consisted of a 150-page binder containing teaching strategies, articles, and learning theories. Once TAL took hold, it began to inform every facet of training and support. In 2001, a separate TAL text was written for pre-institute prep work; the following year a special course on TAL was added to institute training itself.
The development of the TAL principles was an iterative process that was refined as the data coming in about the distinguishing characteristics of the most successful CMs became more robust. TFA also relied heavily on twice-yearly corps member surveys for insight into satisfaction levels regarding training and support. The two sets of data gave TFA a rich soup of information that helped shape ongoing improvements to the program. By the summer of 2006, analysis of the data resulted in the expansion of the TAL rubric to six principles with twenty-eight related actions.
And that year, in a major leap forward in its drive to improve training, teacher development, and student achievement, TFA began for the first time to track CM on-the-job performance as it specifically related to each of the twenty-eight TAL actions. Now TFA could take the CMs whose students were making significant gains and look at the TAL rubric data to see how that group was performing on each of the twenty-eight teacher actions. Analysis of the data could yield more insights into the distinguishing characteristics of great teachers, which could potentially lead to the discovery of new selection profiles and further refinements to the training and teacher development programs. “It’s like someone turned the light on in terms of shaping our training and support,” said Steven Farr, TFA’s vice president of knowledge development and public engagement, a 1993 alum and graduate of Yale Law School. “It’s a brave new world.”
At the same time, rather than teach TAL as a separate course, program designers were working at incorporating that overarching principle into every facet of teacher training. Incoming corps members would be provided with a TAL textbook to establish mind-set before arriving at summer institute. Once there, they would be immersed in the nuts and bolts of teaching.
Afterward, corps members could refer to an interactive website featuring an online TAL textbook, a how-to guide providing new teachers with immediate access to the basics of good teaching. By 2008, TALON, as it was called, was to become a virtual coach to CMs, not only showing with annotated illustrations what works in the classroom, but teaching through online interactions how to adapt those practices and continuously improve. In addition, TFA was planning electronic resource centers for teachers through its online system called TFAnet, which would establish communities for CMs to share ideas and get advice from experts.
Despite the constant push to improve selection, training, and support, identifying its best teachers and accurately measuring significant gains in student achievement remain an imperfect science. Under No Child Left Behind, states are permitted to set their own standards for proficiency, creating wide variances across the country in terms of both the quality and the rigor of curriculum and assessments. A recent Harvard study of student achievement in New York City indicated that TFA’s measure of significant gains was reliable, but even by the organization’s own reckoning, it was “very messy and unscientific,” as TFA’s vice president of research, Abigail Smith, conceded in 2006. “We work as hard as we can to norm across regions, but we recognize…the apples and oranges challenge.” TFA remains undeterred. “We can’t let ‘perfect’ be the enemy of ‘good,’” insists Farr. “We have got to go with our best hypothesis and move forward.”
The reliability of most educational research has always been questionable, with the value of each reform or program seen through the eyes—and innate biases—of the beholder. In 2002, the federal government passed a law establishing the Institute of Education Sciences to foster “scientifically based,” federally funded research on which to ground education practice and policy. But the budget for research was small, and the What Works Clearinghouse, set up to review educational research, was quickly dubbed the Nothing Works Clearinghouse, since so few studies reviewed met the rigorous methodological standards set by the government.
Still, the finding that the quality of teaching has the single most profound effect on a child’s academic growth is generally accepted as gospel. What is also uncontested is the fact that the amount of research into teacher education and preparation as it relates to student outcomes is “relatively small and inconclusive,” according to the American Educational Research Association’s 2005 report on research and teacher education, entitled “Studying Teacher Education.” Several external studies of TFA’s effectiveness in recent years have reached differing conclusions. The largest study, published by the Center for Research on Education Outcomes at the Hoover Institution, Stanford University, in 2001, looked at student outcomes in Houston public schools and compared TFA results with those of other teachers; it found that the TFAers “perform as well as, and in many cases better than, other teachers hired [by the district].”
A much smaller study by Arizona State University researchers of the Phoenix Public Schools, released the following year, came to the opposite conclusion: it reported that students of TFA teachers “did not perform significantly different from students of other undercertified teachers” and that the students of certified teachers outperformed students of undercertified teachers, regardless of the pathway. TFA objected to the methodology and conclusions of the study, and was frustrated that the media gave it more weight than the organization felt it deserved.
Mathematica Policy Research, Inc., an independent research firm that evaluates socioeconomic issues driving public policy, released its study of Teach For America’s effectiveness in 2004. Using a random assignment design, considered the most scientifically desirable—and expensive—method of research, Mathematica compared TFAers with both new and veteran teachers of nearly two thousand students in one hundred first- to fifth-grade classrooms in six of TFA’s fifteen regions. The study found that TFA teachers, though lacking traditional teacher training, generated larger math gains than their peers and had the same minimal impact on reading. Mathematica’s conclusion: TFA offered an appealing pool of “academically talented teachers” who contributed to the academic achievement of their students. Employing a TFA recruit amounted to a risk-free hire.
Teach For America trumpeted the Mathematica study results on its website and in national press releases. TFA fans and foes alike lauded the study’s design, widely regarded as the gold standard in research. But the following spring, Linda Darling-Hammond struck again. In a paper entitled “Does Teacher Preparation Matter?” Darling-Hammond reviewed achievement data among Houston’s fourth- and fifth-graders over a six-year span and concluded that uncertified TFA teachers had a significant negative effect on student gains relative to certified teachers. This time, TFA vigorously defended the effectiveness of its program and attacked the rigor of Darling-Hammond’s methods, through the media and on its website.
Wendy Kopp, writing in The Stanford Daily, scolded the paper for its coverage of what she believed was a flawed study by Darling-Hammond, who had moved to Stanford University’s School of Education in 1998. TFA’s Abigail Smith concluded that Darling-Hammond had an “inexplicable, twelve-year vendetta against Teach For America.” The Stanford professor stood by her work, calling TFA a “Band-aid on a bleeding sore.” The educational establishment was abuzz. But TFA funders were unmoved by the public tit for tat. So, too, were potential candidates: applications soared to an all-time high in 2005.
Internally, the collection and analysis of data continued to fuel the TFA engine. For selection, the idea was to be as accurate as possible on the front end to ensure the very best results at the back end in terms of student outcomes. By 2005, the data from preceding TFA classes on both admissions and CM effectiveness was rich enough for the organization to begin making fairly accurate computer predictions about which candidates were most likely to succeed—rendering unexpected defections like Dave Buehrle’s all the more important to study.
TFA’s predictive selection model had identified six profiles across the seven competencies that were used to judge potential candidates. A candidate had to be rated at least a 2 (solid) on every competency even to be considered. After that, candidates had to either “spike”—that is, score a 3 (exemplary)—in achievement alone, a very high bar to meet, or spike in two of four other competencies: perseverance, critical thinking, influence/motivating, or organizational ability. (While the two other traits, “respect” and “fit with TFA,” were valued, spikes in either or both were not enough on their own to green-light admission.)
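The gating logic described above can be sketched in a few lines of code. This is a hypothetical reconstruction of the rule as the text describes it—the competency names and the 1-to-3 scale come from the passage, but the function itself is illustrative, not TFA’s actual model:

```python
# Illustrative sketch of the selection gate described in the text.
# The competency names and 1-3 scale follow the narrative; the code
# is a hypothetical reconstruction, not TFA's actual model.

SPIKE_COMPETENCIES = {"perseverance", "critical thinking",
                      "influence/motivating", "organizational ability"}

def green_light(scores):
    """scores maps each of the seven competencies to
    1 (weak), 2 (solid), or 3 (exemplary)."""
    # Every competency must be at least "solid" (2) to be considered.
    if any(s < 2 for s in scores.values()):
        return False
    # A spike (3) in achievement alone clears the bar on its own...
    if scores["achievement"] == 3:
        return True
    # ...otherwise the candidate needs spikes in two of the four other
    # qualifying competencies. ("Respect" and "fit with TFA" are valued
    # but do not count toward admission on their own.)
    spikes = sum(1 for c in SPIKE_COMPETENCIES if scores[c] == 3)
    return spikes >= 2
```

The sketch makes the asymmetry of the rule visible: a single achievement spike is sufficient, while spikes in the other competencies are only sufficient in pairs, and “respect” and “fit with TFA” act purely as minimum thresholds.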
Wendy Kopp understood the selectivity piece from the very beginning. In the inaugural corps, only one in five applicants made the cut. TFA’s admission policy remained highly selective throughout the first decade, even though the number of applicants averaged only 2,500 a year. In gearing up for growth in the second decade, TFA decided it would expand only if it could maintain, and even exceed, its already high level of selectivity. Though the number of applicants was running to five figures by 2002, TFA didn’t accept 2,000 corps members until 2005, when 17,000 people applied. The decision to remain highly selective was key to the success that followed.
In 2005, Jim Collins singled out Teach For America for its ability to get the right people, that is, top-flight talent, onto the proverbial bus without relying on money as an incentive. In the Social Sectors monograph to Good to Great, he called TFA an “elegant” idea, noting that TFA was able to attract America’s elite students to its movement by appealing to their “idealistic passions” and making the process selective. That selectivity led to “credibility with donors, which increased funding, which made it possible to attract and select even more young people into the program,” wrote Collins. He said that Kopp understood three fundamental points: (1) the more selective the process, the more attractive the position becomes; (2) purity of mission is a powerful motivator; and (3) the number one resource is having enough of the right people committed to the mission.
To ensure the success of the first five-year growth plan, TFA worked on attracting top-quality staffers to grow its organizational capacity and deepen its bench. Staff salaries rose—along with the bar for performance. TFA head-hunted aggressively among its own pool of alumni and their contacts. It was looking for ambitious, goal-seeking staffers, and it rewarded the ones it hired with lots of responsibility. For the underperforming, it was not a comfortable place to work. Many of them self-selected out; others were pointed toward the door or dismissed. Most nonprofits had a greater tolerance for underperformers, a tendency that made frustrated goal-oriented staffers bolt. The staff retention rate at TFA was more differentiated. Staffers, like CMs, were constantly evaluated. The idea was to retain the highest number of top performers and the lowest number of nonperformers. Over 90 percent of the top performers stayed at TFA; the retention rate for the less successful was more like 20 to 30 percent.
Growing its funding base was a key goal in the 2000 five-year plan. TFA wanted to be fully diversified so that it would never again be dependent upon a single revenue stream for its growth—or survival. It approached the development challenge the same way it did every other part of the mission—by setting ambitious targets and doggedly pursuing them.
Kevin Huffman, a 1992 corps member in Houston along with KIPP founders Levin and Feinberg, took on the task of heading development just a few months after speaking on a panel during the tenth anniversary alumni summit in New York. At the time, he had a well-paying job at a prestigious law firm, but being around the old TFA gang again precipitated a full-blown crisis of conscience. Huffman decided he had to get back into the nonprofit world. When Kopp got wind of that, she invited him to rejoin TFA. On his thirtieth birthday, in August 2000, he left his lucrative job with Hogan & Hartson and returned to the TFA fold.
Huffman had no experience in fund-raising, but he saw right away that there were changes in structure and culture that could significantly increase revenues. TFA’s regional sites clearly represented a largely untapped source of diversified funding that the organization moved aggressively to exploit. The multiregional setup allowed TFA to figure out the best development practices, quickly share them with other sites, and then execute them. Goals were set, and a rigorous central tracking system built around a high level of skepticism was put in place. Regions were required to do a monthly check-in with the national office to categorize the likelihood that pledges would actually be delivered. The check-ins resulted in brutally honest assessments—both on the reliability of pledges and the proficiency of the fund-raising.
Restructuring the fund-raising effort was easy. The cultural piece was a bit tougher. The focus within the organization had always been on the program—how to build it, improve it, expand it. Fund-raising was seen as a necessary evil, a dirty task that needed to be undertaken in order to do the things that really mattered. Under the five-year plan, fund-raising came to be seen as part and parcel of the mission, an endeavor of elemental importance.
TFA enlisted new funders to be on the ground floor of the expansion at their regional sites. And it began to appreciate the importance of synergy between the private and public sectors. It started to invest more in building personal relationships—especially in Washington, D.C., home to policy makers and federal dollars. It didn’t hurt that the new president, like Kopp, was a Texan. During his campaign, George W. Bush had flown Kopp cross-country on his plane to discuss Teach For America. When he took office in 2001, he named Rod Paige, superintendent of the Houston Independent School District, secretary of education. Paige had had a long and happy history with TFA in Houston, and he viewed it as a catalytic force in public education.
“We were still a relatively small nonprofit,” recalls Huffman. “We were national in scale but probably not that well known, and all of a sudden we had people in D.C. who thought we were great.”
As TFA was figuring out how to engage the power brokers in Washington, it was equally mindful of Wall Street. In 2002 its first national corporate sponsorship fell into its lap when Wachovia Corporation approached TFA to partner up. National corporate partnerships with Lehman Brothers and Amgen followed. In 2002, TFA’s annual New York City benefit dinner raised $860,000. Five years later, it raised more than $4 million.
TFA also tapped John Q. Public through Sponsor a Teacher, and continued to seek funding through foundations. The Broad Foundation, the Carnegie Foundation, the Knight Foundation, and New Profit, Inc. were among a dozen or so philanthropies that joined the Pisces Foundation in underwriting the 2000–2005 expansion. By the end of the 2005 fiscal year, operating revenue had grown from $10 million in 2000 to $40 million. Amazon.com named Teach For America one of the country’s ten most innovative nonprofits, and the organization received Charity Navigator’s highest rating for sound fiscal management.
As the national team worked feverishly to improve the program, the regions worked equally hard executing. Samir and the other program directors were at school sites every day of the week except for Tuesdays. The second day of every workweek was spent in the downtown office, where all the PDs met with managing director of program Felicia Cuesta to assess their work as thought partners to recruits, to share best practices, and to plan ahead for the other programmatic roles they were assigned.
Samir always approached Tuesdays with mixed feelings.