Beer and Circus


by Murray Sperber


  Within this context—the school’s neglect of general undergraduate education—Buffalo’s move to big-time college sports makes sense. UB’s situation is typical of many universities: because they cannot provide their undergraduates with an adequate education but need their tuition dollars, they hope to improve “the quality of student life” on their campuses; in other words, to bring on the beer-and-circus.

  At Buffalo, because of the losing teams, the move to big-time college sports has failed up to now; nonetheless, at many schools with successful teams, beer-and-circus rules, and the student happiness level rises. Buffalo is using a paradigm that has succeeded elsewhere but, because of the demography of college sports recruiting, probably will never work for this school. Indeed, with consistently losing teams, UB might generate an anti-Flutie effect, an image as a “loser school” with declining enrollments. UB professor William Fischer worries that this “negative halo effect,” along with a decade of state funding cutbacks, will further “degrade” his school.

  In contrast, Florida State University has achieved an almost permanent Flutie Factor. With fertile southern high school football fields to harvest, the Seminoles are always near or at the top of the national football polls, as well as the “party school” lists (FSU held its high ranking on the Princeton Review’s “party school” list throughout the 1990s). But FSU also rates very low in the quality of its undergraduate education. In addition, as a university with research ambitions, Florida State has poured millions into its research and graduate programs. This school is the current national champion in college football, and a prime example of an institution that provides its students with beer-and-circus and not much undergraduate education. If a beer-and-circus poll existed, FSU would be the national champ.

  The next section of the book details the neglect of undergraduate education and the entrenchment of beer-and-circus. To understand this phenomenon, one must first examine the finances of higher education in the final decades of the twentieth century, and the inability of university leaders to confront the new economic reality, while at the same time pursuing research prestige for their institutions. With this framework in place, the role of beer-and-circus in the contemporary research university becomes clear.

  PART TWO

  COLLEGE LITE: LESS EDUCATIONALLY FILLING

  7

  SHAFT THE UNDERGRADUATES

  In an influential early-1960s book, The Uses of the University, Clark Kerr, the president of the University of California system, contrasted established research universities, for example, Harvard, Yale, and his campus at Berkeley, with newer schools striving for research prestige. He noted that “the mark of a university ‘on the make’ is a mad scramble for football stars and professorial luminaries. The former do little studying and the latter little teaching, and so they form a neat combination of muscle and intellect” that keeps the faculty and the collegiate students happy. In addition, the administrators who create this conjunction between football and faculty stars do well: they bring fame and fortune to their schools and enhance their jobs.

  Kerr described the beginnings of a phenomenon that, because of the turmoil in higher education from the mid-1960s to the early 1970s, was temporarily put on hold. But his vision of universities “on the make” and their use of intercollegiate athletics as campus and public entertainment started to come true in the mid-1970s. He also foresaw an “inevitable” side-effect: “a superior [research] faculty results in an inferior concern for undergraduate teaching.”

  This section of Beer and Circus focuses on this phenomenon: universities striving for research fame, neglecting undergraduate education, and promoting their college sports franchises.

  Table 1 lists the universities in the 1906 ranking, matched against the order of the top 15 in 1982. These listings demonstrate that a reputation once attained usually keeps on drawing faculty members and resources that sustain the reputation … .

  Over the nearly 80 years from 1906 to 1982, only three institutions dropped out from those ranked as the top 15—but in each case not very much … and only three were added.

  —Clark Kerr, University of California president emeritus

  Clark Kerr went from the University of California to head the Carnegie Foundation for the Advancement of Teaching and, in 1991, he published an important article comparing the rankings of the top fifteen research universities in 1906 with those in 1982. Considering the momentous changes in higher education during that time span, his findings were unexpected but, after analysis, were entirely logical: the rich arrived first and stayed on top, and no matter what the rest did, they could never overtake these institutions. In 1906, the early period of university research and graduate schools, Ivy League universities dominated the top-fifteen list, and almost eighty years later, they continued to prevail. Similarly, the first private, non-Ivies that emphasized research and graduate education—Johns Hopkins, Chicago, MIT, and Stanford—were still in the top fifteen, as were the first public universities that embraced research and PhD programs—Berkeley, Michigan, and Wisconsin. Predictably, at the beginning of the twenty-first century, almost all of these schools remain in the top echelon, with only Duke and Cal Tech now consistently joining them.

  Kerr titled his article, “The New Race to Be Harvard or Berkeley or Stanford,” and he began, “All 2,400 non-specialized institutions of higher learning in the United States aspire to higher things. These aspirations are particularly intense among the approximately 200 research and other doctorate-granting universities.” He then demonstrated that this race was a fool’s errand for almost all participants. Additionally, it had negative side-effects for all schools, including the winners: the emphasis on research devalued undergraduate education, and “the regrettably low status of teaching in higher education provides faculty members less reward from that activity than they expect to gain from heightened research” work.

  In his article, Kerr also discussed the phenomenon of “Upward Drift”: universities that, whether or not they could afford the cost, relentlessly added graduate and doctoral programs in order to compete in the research prestige race. Moreover, administrators of Upward Drift schools chose this course of action during a time of economic difficulties for higher education: in the 1970s and 1980s, with the end of the baby boom, tuition revenue dropped; state legislators and taxpayers, disillusioned with most public agencies, drastically cut funding to higher education; and inflation squeezed every school’s financial resources. But Upward Drift continued.

  With diminished revenue, most schools had to make choices. Only the richest universities could afford to maintain high-powered graduate schools and quality undergraduate education programs. Some small private colleges that had started graduate programs during flush times cut them, concentrating their resources on undergraduate education. Upward Drift universities made the opposite choice: they put scarce dollars into their graduate schools and neglected undergraduate education. A 1990s study explained that the pursuit of research fame and prestige supplied the “potent drivers of institutional direction and decision-making” at Upward Drift U’s. The study also indicated that these schools continued this policy in the 1990s, despite “much talk on campuses about downsizing and concentrating on the core business of undergraduate teaching.”

  In 1973, Clark Kerr created a classification system for higher education that also provided a way to measure Upward Drift. His top category, “Research Universities I,” consisted of those institutions granting at least fifty PhD’s per year, giving a “high priority to research,” and meeting various other criteria. The established research universities dominated the group, but, in the next two decades, a number of schools joined them. Significantly, almost all of the new members of Research Universities I also belonged to NCAA Division I, for example, Arizona State, Florida State, Kansas, Kentucky, Louisiana State, Nebraska, Temple, UConn, UMass, Virginia Tech, and West Virginia. However, even though these schools frequently had top-twenty college sports teams, none of them ever broke into the top fifty on the standard rankings of national universities. But all of these universities changed the nature of their institutions: as the authors of the Upward Drift study indicated, “Despite pressures to emphasize the role of undergraduate education, ambitious institutions” were and are “beguiled by the promise of prestige associated with doctorate-level education.” These universities spent, and continue to spend, enormous sums of money on their graduate departments, and much less proportionally on undergraduate teaching.

  Upward Drift also involved schools moving up to “Research Universities II” (fewer doctoral programs than in RU-I, but still committed to graduate education). Among the new arrivals in II were Houston, Mississippi, Ohio U, Rhode Island, South Carolina, Texas Tech, and Wyoming—all members of NCAA Division I but, predictably, trailing their wealthier siblings in that field as well. Upward Drift continued in the lower categories—Doctorate-granting Universities I and II (smaller graduate programs and fewer PhD’s per year)—and included many schools near the bottom of NCAA Division I trying to climb the research and athletic polls. Again, none of these universities ever made the top-fifty rankings of national universities, but they all chose to participate in the research game—even though, before the 1970s, some were liberal arts colleges doing a good job of educating undergraduates.

  The universities sitting on top of the research polls throughout the twentieth century have always dictated the rules of the game. The result, according to one critic, “is a monolithic status system that pervades all of higher education, a system which places an inappropriate value on so-called ‘pure’ research and on the national reputation for the person [the professor] and the institution that this research can bring.” Since the 1970s, the administrators of almost all universities have endorsed this “monolithic status system”—whether suitable to their particular campus or not—believing that research prestige was the way to attract attention to their institution and to improve its standing in the academic world.

  For an Upward Drift school to move higher in the prestige polls, it has to pass a more established research institution. But higher-ranked schools are not standing still or drifting downward; in fact, they work hard to improve their positions in the polls. For example, the University of Illinois at Champaign-Urbana, with very tight budgets throughout the 1980s and 1990s, continued to pour millions into its graduate programs and to neglect its undergraduate ones. An editor of the University of Illinois student newspaper described the state of her campus in the late 1980s: “It’s clear that all the money is going to research. It seems so blatant when you see the run-down English [and other classroom] buildings and the fancy new research buildings. The U of I is really a research park that allows undergraduates to hang around as long as they don’t get in the way.”

  At Rutgers University we have spent the past fifteen years [from the mid-1970s through the 1980s] successfully competing both for talented junior faculty [researchers] and for world-class scholars by promising them minimal teaching schedules. I know of junior colleagues who have been on the faculty roster for two years and have scarcely seen the inside of a classroom.

  —Benjamin Barber, Rutgers professor

  Schools try to ascend the academic polls by accumulating faculty who possess or will achieve research fame. Rutgers, the main public university in New Jersey, provides an example of a university “on the make” for research prestige. In the 1970s and 1980s, it aggressively tried to move up in the academic research world (it also entered big-time college sports at this time), but, for all of its efforts, as well as some success in faculty hiring, it never managed to break into the top-fifty rankings of national universities in the U.S. News poll (or the top twenty in the sports polls). Moreover, as Rutgers anthropologist Michael Moffat documented in a 1980s book, general undergraduate education at the school was abysmal and deteriorating.

  Professor Barber also related an anecdote about an “Ivy League university, disturbed by the disrepute into which teaching had fallen, [that] recently offered its faculty a teaching prize. The reward? A course off the following year!” Amazingly, other schools offered similar bonuses as part of their teaching awards. These stories, as well as the Rutgers tale, spotlighted the faculty’s role in the deterioration of undergraduate education during the era of Upward Drift.

  Trained in the old and the new graduate programs, most professors come from the ranks of academically inclined undergraduates, and exhibit the traditional professorial distaste for teaching large numbers of collegiates and vocationals. Only the faculty’s academic “children” and some rebels are worthy of their time—but not too much of it. In a 1980s study, the Carnegie Foundation determined that at research universities, only 9 percent of the faculty spent more than eleven hours a week teaching undergraduates, whereas 65 percent logged less than ten hours a week in this endeavor, and 26 percent spent zero hours on undergraduate teaching (two decades later, there is even less classroom contact between faculty and undergraduates, particularly between faculty and nonhonors students).

  Yet, the Carnegie investigators found that faculty members were busy with their research, a majority devoting more than twenty hours a week to it, and many over forty hours per week. Professors sometimes criticized their school’s “publish-or-perish” syndrome, but they participated in it, usually quite willingly. Their language revealed their priorities: faculty referred to their “teaching loads,” as if pedagogy were a burden—at a time when most research universities established two-courses-per-semester as the standard teaching assignment for a faculty member, that is to say, six hours per week in class (however, at least one-third of all professors managed to spend fewer hours in a classroom, sometimes none at all). Faculty also talked about “research opportunities”—those bright, shiny projects and grants to live and die for. Moreover, when professors discussed their “own work,” they never meant their teaching, only their research.

  In America, because money measures the value of work, universities send clear signals with their pay scales. Before the 1970s, a few star professors received more money and perks than their colleagues; however, most faculty salaries were uniformly low but equitable, with years in rank as the main criterion. Upward Drift and the tight budgets of the 1970s and 1980s created a new pay scale: universities generously rewarded all professors who furthered the institution’s research goals, and they gave the rest of the faculty—no matter how excellent their teaching—minimal raises. Similarly, they rewarded “productive faculty,” a.k.a. researchers, with such perks as personal research accounts, extended paid leaves to do research, and fewer, if any, undergraduate courses. Otherwise, only faculty who became full-time administrators continued to climb the salary ladder, though not with the same speed as the outstanding researchers.

  In addition, in promotion and tenure decisions, universities emphasized research achievements and potential to a greater extent than previously; if a candidate was an ordinary researcher but an outstanding teacher, his or her chances for promotion and tenure were slim to none. The research imperative drove the reward system, but American business culture, notably its obsession with quantitative measurements and numbers, influenced the process. University administrators and committees could count a faculty member’s publications; however, they could not evaluate teaching in any numerical way (even quantitative student evaluations were and are unreliable because of instructor manipulation and student subjectivity). Most important, research built a faculty member’s reputation outside the institution and reflected back upon the school, enhancing its reputation; whereas the fame of even a superb undergraduate teacher rarely extended beyond campus boundaries and made almost no impact on the national ranking of the university.

  Faculty members at research universities have long divided their loyalties between their professional disciplines (their academic fields, societies, meetings, and colleagues throughout the world) and their home universities. In the 1960s, sociologist Burton Clark described those professors immersed in the world of their disciplines as “cosmopolitans,” and faculty mainly involved in their teaching and other duties on their particular campuses as “locals.” Before the 1970s, most universities had a healthy percentage of “locals”; by the 1980s, when the Carnegie Foundation measured the percentage of locals versus cosmopolitans, it discovered that at research universities, only 21 percent of the faculty felt that their school was “very important” to them, whereas 79 percent considered their professional discipline as “very important,” the center of their academic lives. (A generation later, the percentage of “locals” probably has dropped to single digits at many institutions, with the Internet enabling cosmopolitans to remain based at, but permanently apart from, their schools.)

  The decline in faculty loyalty to their home institutions also reflected universities’ decrease in loyalty to them, particularly the failure of schools to reward their “locals,” usually their best undergraduate teachers, with salary increases and promotions. The signal, especially to young faculty, was unequivocal: to gain rewards from a university, be a “cosmopolitan” researcher. And well-traveled “cosmopolitans” began to consider the university as “a place to hang one’s hat” until they accepted a better offer from another institution.

 
