
American Experiment

by James MacGregor Burns


  It would take rare leadership to overcome their ignorance and alienation, to attract them to the polls, to enable them to vote their deepest, most authentic, and abiding needs. Aside from Jesse Jackson, who demonstrated a remarkable talent for mobilizing low-income blacks and whites in his 1988 presidential primary campaign, this kind of leadership was missing, at least on the left, in the America of the late 1980s. Such leadership could not be manufactured—it emerged out of a people’s heritage, values, aspirations, and took its color and energy from conflict. Great leadership historically had never been possible except in conditions of ideological battle. Such conflict was not in sight in a nation whose liberal leaders, or aspirants to leadership, appeared wholly content with a politics of moderation, centrism, and consensus.

  A Rebirth of Leadership?

  As the 1988 presidential nominating races got underway more than a year before the first primary, it appeared unlikely that one election could break open the party and the constitutional gridlock that gripped the American political system. From the very start the presidential candidates were entangled in one of the worst leadership recruitment systems in the Western world. The presidential primary not only pitted them against fellow party leaders in endless and acrimonious combat; it forced them to mobilize personal followings that after the combat might persist as dispersive and even destructive forces within the parties and within their Administration. It was not surprising that the governor of a large state, such as New York’s Mario Cuomo, would reject this process, whereas fifty-six years earlier Franklin D. Roosevelt had found it possible to perform as governor and to campaign for the presidency in the much less demanding nominating procedure of 1932. How defensible was a selection process that choked off the recruitment of some of the best and busiest leaders?

  In other Western democracies the parties served not only as recruiting agencies for leaders but as training grounds for leadership. Party mentors identified, coached, and promoted promising young men and women—sometimes in actual party schools. By and large, the more doctrinal the party, the more effective its recruitment and training programs. It was only when the GOP became a more ideological party that it “engaged in an extensive program of political education for legislative candidates and their managers,” John F. Bibby reported. But this effort was exceptional; in this century American parties have been too flaccid, underfinanced, and fragmented to serve as schools of leadership. Other sectors of American society, among them corporations, the military, and government agencies, taught forms of leadership, but these were specialized programs that served the purposes of the organization rather than the broader needs of the general public.

  Typically, Americans were trained to be effective politicians—good brokers, manipulators, money raisers, vote winners. Since the very nature of the governmental system, with its rival branches and complex dispersion of power, put a premium on transactional leadership, American politics offered endless play to lawyers and other negotiators and mediators. The system would long ago have collapsed without their capacity to grease the machinery at the connecting points. But what if the machinery was failing anyway? Appeals for creative, transforming leadership were frequent in the 1980s but vain. Such leadership could not be summoned like spirits from the vasty deep.

  When political leaders fail, Americans often turn to the next most available saviors or scapegoats—the educators. The era from Vietnam and Watergate to Iran-Contra generated even more than the usual calls for reforming or revolutionizing the nation’s secondary schools and, even more, its colleges and universities. Most of the proposals, dusted off for the latest crisis, embodied the special ideological, professional, or career interests of the reformers—more cross-disciplinary studies, more emphasis on reading the classic writings of the great philosophers from Plato on, strengthening the liberal arts curriculum, and the like. There was much emphasis in the 1980s on teaching and the taught, but significantly less on the teachers. Few of the reformers appeared to comprehend that teachers were the people’s first and most influential set of leaders, as role models, opinion shapers, inspirers, disciplinarians, embodiments of ongoing middle-class, ethnic, and political traditions.

  If the public had recognized the central importance of teachers, perhaps proposed reforms would have focused more on these human beings. Or perhaps not, because “reforming” the human beings would have appeared far more difficult and dangerous than manipulating processes or techniques. Still, the quality of the teachers—their competence, breadth of knowledge, intellectual vigor, commitment to the classroom, and professionalism—was far more important than their specific mode of teaching, set of readings, or place in the curriculum.

  This centrality of the teacher made all the more crucial and ominous a finding during the “educational crisis” of the 1980s that received little attention at the time compared with the headlines dwelling on superficialities. From a random sample of 2,500 Phi Beta Kappa members and of almost 2,000 Rhodes scholars, Howard R. Bowen and Jack H. Schuster concluded in 1985 that fewer and fewer of the nation’s most intellectually promising young people were entering or planning careers in higher education. This finding had the direst implications for the quality of the best kind of teaching as leadership for the half century ahead; and, as the analysis concluded, it was also significant that “the academy, it seems, grows less and less attractive as a house of intellect, as a nurturing and stimulating environment for the gifted and creative.”

  This finding was widely ignored, perhaps because by implication it called for the most prodigious effort to draw the truly best and brightest of the nation’s youth into teaching. The Bowen-Schuster report noted, as had so many earlier findings, that the “quality of working conditions for faculty also has deteriorated markedly over the past decade and a half: less clerical support, overcrowded facilities, outmoded instrumentation, tighter library budgets, and poorly prepared students.” No improvement was expected for another decade or so. To overcome these deficiencies in public higher education would call for the kind of clear goals, dependable funding, long-range planning, firm commitment, steady policy making, and persistent follow-through that were so uncommon in American government.

  What should good teachers teach? Not what to think but how to think—that is, how to think across a wide span of disciplines, values, institutions, and policies in a highly pluralistic, fragmented culture. Educators based their claim to priority ultimately on the proposition that the products of liberal arts or humanities programs, as exemplified by Rhodes scholars and Phi Betas, had shown such intellectual grasp of a variety of subjects as to equip them as political leaders to deal with the diverse and continually shifting problems they would face as leaders.

  But could any group—even an educational elite—cope with the combination of political fragmentation and intellectual disarray that threatened the American future?

  The intellectual disorder had manifested itself during the past half century in the loose collection of hazy ideas that passed as the American idea-system; in the flowery platitudes of candidates, whether about communism or the family or the deficit or poverty; in the once famous New York School of art that fractured into several New York schools and later into an endless succession of styles; in the hopes for a unified social science declining in the face of ever-multiplying subdisciplines and specializations; in the disintegration of the humanities into a “heap or jumble” that reminded Allan Bloom of the old Paris flea market.

  A century and a half ago Tocqueville had observed that science could be divided into three parts: the most abstract and theoretical principles; general truths derived from pure theory but leading “by a straight and short road to practical results”; and methods of application and execution. On the practical matters, he noted, “Americans always display a clear, free, original, and inventive power of mind,” but few concerned themselves with the theoretical and abstract. On the other hand, Tocqueville said, American orators and writers were given to speaking in the most inflated, grandiloquent style about vast topics.

  The American’s “ideas,” Tocqueville summed up, were either extremely minute and clear or extremely general and vague: “what lies between is a void.” The idea of freedom was his best example. It is the best example today of the “Tocquevillian void.”

  Of all the central ideas in the American experiment the concept of freedom had been the most glorious, compelling, and persistent—and also the most contrarily defined, trivialized, and debased. The Declaration of Independence of 1776 was essentially a paean to liberty, a term that has long been used as an equivalent of freedom. Eleven years later the Constitution would secure “the Blessings of Liberty to ourselves and our Posterity” and in 1791 the French Constitution, responding to the same Enlightenment values, incorporated a Declaration of Rights asserting that “men are born and live free and equal as regards their rights.” Within seventy-five years “freedom” had become so evocative, and yet so hazy, as to be invoked by Union soldiers “shouting the battle cry of freedom” against slavery, by Confederate troops “shouting the battle cry of freedom” against Yankee oppression, and by a black regiment singing, “We are going out of slavery; we’re bound for freedom’s light.” During the past century speakers and writers across the entire political spectrum, from American communists to the extreme right, have invoked the term. It was rare to hear a major speech by Reagan, or by the Democratic aspirants of 1988, that did not appeal to freedom or liberty or their equivalents. It was even rarer to hear them spell out what they meant, except in mere banalities, shibboleths, and stereotypes.

  Did it matter that Tocqueville’s void still loomed toward the end of the twentieth century—that orators continued to “bloviate” and millions of men and women went about their minute, day-to-day decision-making with no linkage between the two? There would be no practical will to action, the philosopher Charles Frankel wrote, unless value judgments were made—and made explicit. If there was to be conversion of social theory into social action on a scale large enough to shape the whole society, a social philosophy that explored “the basic choices available” and offered “an ordered scheme of preferences for dealing with them” was indispensable.

  Any one of our animating ideas was complex enough—had to be complex to be so attractive to so many different minds. Liberty was the prime example. A word that appears on our coins, on the marble walls of public monuments like the Lincoln and Jefferson memorials, in virtually every stanza of the great national anthems, had to resonate appealingly through many classes, regions, and occupations. But what did it mean, as a guide to action? Only negative liberty—freedom from arbitrary regulation by public or private power wielders? Or also positive liberty—the freedom to take purposeful steps, often in social and economic areas, to realize one’s goals? Both freedoms could be left at first to the private sphere, but as society became more complex and interrelated, the two liberties increasingly impinged on each other and on the public realm. This happened most dramatically with slavery, and led to one of Lincoln’s wisest reflections. “The world has never had a good definition of the word liberty,” he declared in 1864, “and the American people, just now, are much in want of one. We all declare for liberty; but in using the same word we do not all mean the same thing. With some the word liberty may mean for each man to do as he pleases with himself, and the product of his labor; while with others the same word may mean for some men to do as they please with other men.…”

  Events expanded the concept of liberty, and further complicated it. Franklin Roosevelt not only took the lead in defending the Western democratic definition of freedom against Adolf Hitler’s perversion of it, but in proclaiming the Four Freedoms he nicely balanced the negative liberties of speech and religion from arbitrary public and private action against the positive liberties of national military security and personal economic security. Later, contending that “necessitous men are not free men,” he said, “We have accepted, so to speak, a second Bill of Rights under which a new basis of security and prosperity can be established for all—regardless of station, race, or creed.” The President then listed a set of positive economic rights that would constitute the agenda for liberal Democratic Administrations and candidacies in the years ahead.

  The struggle over negative liberty—personal protection against authority—attracted some of the most impressive intellectual leadership in the history of the nation. The philosophical heritage of individual liberty, the Jeffersonian and Lincolnian defenses of this supreme value, the fervent conservative vindication of property rights, the vigilance of the American Civil Liberties Union and like-minded groups, the presence on the High Court of justices with the commitment of Louis Brandeis, Harlan Stone, Felix Frankfurter, Hugo Black, William Douglas, Earl Warren, the zeal for civil liberties on the part of appellate judges such as Learned Hand of New York—all of these had variously combined to establish the federal judiciary as, on the whole, the prime definer as well as protector of civil liberties. The enunciation by the High Court during the 1940s of the “preferred position” doctrine, holding that First Amendment freedoms deserved the highest priority in the hierarchy of constitutional protections and presuming to be unconstitutional any law that on its face limited such freedoms, further insulated individual liberty against arbitrary interference.

  Still, civil libertarians could not be complacent as the Bill of Rights bicentennial neared. The judiciary’s record since the founding had been uneven. And when, in 1987, the Chief Justice of the United States, along with the latest Reagan appointee, joined in a minority vote to sustain the constitutionality of a Louisiana statute requiring the teaching in public schools of the creationist theory of human origin, civil libertarians had to assess the implications for the future of appointments by a series of conservative Presidents.

  “All men are created equal.” If the Court had helped fill Tocqueville’s void in the area of civil liberty, the same could not be said about the record of the nation’s intellectual and political leadership in meeting the flat commitment that Americans of 1776 had made to the principle of equality except for slaves and women. This failure was understandable in part because the realization of economic and social equality was intellectually an even more daunting venture than the protection of individual liberty. But even the most essential preliminary questions had not been answered: What kind of equality was the issue—political, social, economic, gender, racial, or other? Guaranteed by what private or public agency, if any? Equality for whom—blacks as well as whites? Equality when? This last question was of crucial importance to low-income Americans long assured that their opportunity would come if only they waited long enough. It had taken almost a century for the nation to take the primitive step of making child labor illegal.

  The intellectual confusion over equality was sharply reflected in the ancient debate between equality of condition and equality of opportunity. It was in part a false debate, for very few Americans wanted absolute or even sweeping equality of condition. But even the sides of the debate were mixed up. In part because Herbert Hoover and other enlightened conservatives had contended that inequality of condition was acceptable as long as all the “runners” had the same place at the starting line, many on the left spurned that kind of equality as brutal capitalist competitiveness.

  But in fact equality of opportunity was a most radical doctrine. If the nation actually wanted persons to achieve positions for which their basic potentials of intelligence and character fitted them, then government must be more than a referee at the starting line; it must intervene at every point where existing structures of inequality barred people from realizing those potentials. If the nation wanted to open the way for people to realize their “life chance,” then government or some other agency must act early in their lives to help them obtain the motivation, self-assurance, literacy, good health, decent clothes, speech habits, education, job opportunity, self-esteem that would enable them really to compete.

  Neither in action nor in analysis did the government fill this Tocquevillian void. Perhaps the political leadership did not wish to, for granting true equality of opportunity would call for innovative social analysis as well as bold and comprehensive governmental action—would call indeed for a program for children’s rights rivaling earlier programs for the poor, women, and minorities. Some presidential candidates in 1988 were cautiously discussing such policies as much-expanded child care and paid leaves for parents of newborns, but no Marshall Plan for children was in sight.

  The vital need for a set of findings firmly seated in clear and compelling moral principles and linked in turn to explicit policy choices was met, almost miraculously it seemed, in 1984 by the 120-page first draft of the Roman Catholic bishops’ “Pastoral Letter on Catholic Social Teaching and the U.S. Economy.” The letter was unsparing of American leadership. The level of inequality in income and wealth in the nation was morally unacceptable. “The fulfillment of the basic needs of the poor is of the highest priority. Personal decisions, social policies and power relationships must all be evaluated by their effects on those who lack the minimum necessities of nutrition, housing, education and health care.” Again and again the bishops assailed selfishness, consumerism, privilege, avarice, and other ugly characteristics of American society. Speaking from their hearts trained in compassion and their heads trained in moral reasoning, from their pastoral closeness to the needs of people and their experience with government programs, the bishops magnificently filled the gap between high moral principle and explicit economic policy.

 
