The World Turned Inside Out


by James Livingston


  Here is how the president of Harvard University acknowledged this fact:

  The last century has certainly been marked by an apparent increase in the power of corporate, as compared with personal, motives. A hundred years ago democratic theories were individualistic. They treated the state as a sum of equal and independent units. Now we have learned that man is a social being, not only in Aristotle’s sense, that he is constrained by his nature to be a member of a state, but also in the broader sense that he is bound by subtle ties to other and smaller groups of persons. . . . We have learned to recognize this; and what is more, with the ease of organization fostered by modern conditions, the number, the complexity, and the aggregate strength of such ties have increased. No one can have observed social life carefully, under any aspect, without seeing that cooperative interests have in some measure replaced personal ones; that in its conscious spirit western civilization has become less individualistic, more highly organized, or, if you will, more socialistic.

  These bracing lines were written by A. Lawrence Lowell and published in a book of 1913 called Public Opinion and Popular Government. We still inhabit the world he described, but we are just catching up to him; for he treated the decay of the old pioneer individualism as an irreversible historical event—a mere fact—rather than the end of the world or the dawn of the millennium. The resolution of the culture wars might, then, be better negotiated from his standpoint than from any side in contemporary debates on the uses of the university and the purposes of group identities. If we adopted Lowell’s standpoint, for example, we would at least be in a position to understand that the eclipse of the self is not a political agenda disguised or illuminated or created by tenured radicals and their academic jargon. We would at last be in a position to understand that “the” self has a history.

  We might also be in a position to understand that the group identities derived from voluntary associations are neither threats to anyone’s autonomy as an individual nor constraints on anyone’s devotion to the American nation as such. In fact, to borrow Lowell’s perspective is to see that group identities are the condition, not the negation, of individualism—it is to realize that you come to know yourself as a unique bearer of a distinct personality only insofar as you recognize others as your equals and participate with them in communities of interest. To borrow that perspective is to see, moreover, that the group identities gathered under the ungainly heading of race, class, and gender may well have become the condition, not the negation, of a nationalism or a patriotism that reaches beyond the narrow elites who, once upon a time (and not so long ago), defined it as the passive acceptance of a white, Anglo-Saxon, Protestant culture.

  The United States has always been a destination with lots of rough edges and porous borders rather than a unitary nation-state; its inhabitants and citizens have always been orphans, renegades, and castaways who came from elsewhere. This polyglot people have never shared a common racial stock, a linguistic affinity, or a national origin, not even in the seventeenth century. And yet, until the late twentieth century, the dominant culture was undeniably WASPish, perhaps because the national identity was still a function of narratives centered on the founding. As things loosened up in the 1960s and 1970s, due to the civil rights movement, the women’s movement, and the mongrel music called rock and roll—the music that both amplified and abolished the color line—the cultural content of the American nation became more complex, more problematic. And so, too, did the question of representation. The plural became political.

  Indeed, by the 1980s, “we, the people” became a problem rather than a premise of political deliberation. But those who wanted to emphasize race, class, and gender—or ethnicity—in imagining America had a certain advantage in solving the problem. For it is easier to preserve, protect, and defend a nation you have helped to create than to identify with a political tradition that excludes you. If the history books tell you, for example, that slaves, workers, immigrants, and women were just as important in building America as the educated white men wearing suits—just as important as the conspicuous individuals who wrote the constitutions and gave the speeches and made the fortunes—then the scope of that imagined community, that America, gets broader, and its moral compass spins faster. The mere possibility that is “our America” becomes yours, no matter where you came from. The nation gets bigger, maybe even better, as a result.

  Reuniting America?

  But the critics of this subdivision had a point in objecting to the centrifugal forces of race, gender, and class. With what were Americans supposed to identify, having validated their claims to a particular, probably ethnic past that did not always intersect with the Western European heritage of the founders—that is, with the legacy of the Enlightenment? As Lynne Cheney, the embattled chair of the National Endowment for the Humanities (NEH), said repeatedly in the late 1980s, there’s not much to identify with if the best-selling textbooks emphasize that the enslavement of Africans, the annihilation of Indians, the conquest of Mexicans, the exploitation of workers, the oppression of women, and the reinvention of imperialism were the major accomplishments of the light-skinned sons of Enlightenment. By this textbook accounting, there can be no usable past—only escape routes from a sordid history, only safe distance from all holocausts. But what narrative could certify the Enlightenment, a.k.a. Western civilization, as the idiot sire of human rights, the abusive father of freedom, and the obvious origin of America, all at once?

  To put it more plainly, how do you endorse a culture that destroyed indigenous peoples, prospered from slavery, celebrated capitalism—that is, possessive, acquisitive individualism—confined women to the home, tolerated racial segregation, and then made a military mess of the world after 1945? That was the question posed by the controversy over the National History Standards in the early 1990s. These Standards were first developed at the University of California, Los Angeles, with a grant from the NEH and input from scholars around the world, then published in 1994 as two complementary volumes, one for U.S. history, the other for world history. Their most outspoken and effective critic was Lynne Cheney, who in 1993 left her position as chair of the NEH to become the W. H. Brady Jr. Distinguished Fellow at the American Enterprise Institute; there she found enough research assistants to produce a book, Telling the Truth (1995), which denounced the Standards and their postmodern sources in the universities.

  Cheney was outraged by “oversimple versions of the American past that focus on the negative” and singled out the Standards as “the most egregious example to date” of this “hypercritical” tendency. But she couldn’t pose as the Pangloss of the late twentieth century, not after the great transformation of higher education. “We should not, of course, retreat into the old myths,” she declared, and insisted that “No one is suggesting that we hide our flaws or neglect the achievements of others.” Even so, she had two serious concerns about the political implications of the new, hypercritical curriculum. First, it sponsored fundamental change, or at least deviation from the received tradition: “For those intent on political and social transformation, a bleak version of history is better than a balanced one. The grimmer the picture, the more heavily underscored is the need for the reforms they have in mind.” Second, it drained the popular sources of patriotism and, in doing so, it disabled American foreign policy, or at least the application of military power overseas: “As American students learn more about the faults of this country and about the virtues of other nations, . . . they will be less and less likely to think this country deserves their special support.” In other words, “they will not respond to calls to use American force.” If students understood that “the American system has uniquely nurtured justice and right,” by contrast, they might identify with that system to the ideological extent required by its military obligations abroad.

  So Cheney was not objecting to the insertion of politics into the classroom—the projection of American power abroad enabled by a more patriotic curriculum is surely a political purpose. She was objecting instead to the particular brand of politics being peddled by a new generation of professors, which promoted radical departures from the past and endangered the national interest in the present. In her narrative, by the 1980s, “a new group of academics was coming into power who viewed the humanities as a political tool, a weapon to be wielded in a variety of causes, but most especially multiculturalism and feminism.” This group was clearly in thrall to a postmodern way of thinking that authorized pragmatism, relativism, and deconstruction—“radical skepticism”—by denying the existence of any reality independent of one’s perspective on reality. Here is how Cheney summarized the indictment:

  In fields ranging from education to art to law, the attack on truth has been accomplished by an assault on standards. The connection is seldom made clear. Indeed, one of the characteristics of postmodern thought is that it is usually asserted rather than argued, reasoned argument having been rejected as one of the tools of the white male elite. But the thinking seems roughly to be that absent external reality, distinctions of any kind are meaningless. No accomplishment can be judged superior to any other—except as it promotes the interests of desired [sic] groups. Without the objective measures that an external reality would provide, who can really say, for example, that the work of some students is better than that of others?

  Let us look more closely, then, at the properties of this postmodern thinking, in which appearance and reality seem to coincide. Doing so will help us decipher the other key words on our list, from pragmatism and relativism to deconstruction and feminism. Then we can decide where the truth really lies.

  chapter three

  The Creators and Constituents of the “Postmodern Condition”

  The Geographies of the Postmodern

  The notion of the postmodern, with all its weird connotations, is powerfully associated with the French philosopher Jean-François Lyotard, whose book of 1979 was written as a “report on the state of knowledge in the Western world” at the request of the provincial government of Quebec (the English translation appeared in 1984 with a foreword by Fredric Jameson, who, as we shall see, had a lot to say about postmodernism long after Lyotard became a mere footnote). For that reason, the very idea of the postmodern has always seemed a dangerous foreign import, an intellectual contaminant from the other shore.

  Lynne Cheney spelled out this attitude in Telling the Truth by tracing the idea’s frightful origins to the theories of Michel Foucault, another French philosopher affiliated with Lyotard and his sources, Gilles Deleuze and Félix Guattari: “Foucault provided a method for continuing revolutionary activity long after American troops had withdrawn from Vietnam. His ideas were nothing less than an assault on Western civilization. In rejecting an independent reality, an externally verifiable truth, and even reason itself, he was rejecting the foundational principles of the West.” Allan Bloom thoroughly revised this extra-American genealogy in The Closing of the American Mind (1987), yet another best-selling funeral oration on the genteel university that disappeared in the great transformation of higher education—in his view, the “master lyricists” of postmodern academic jargon were two German philosophers, Friedrich Nietzsche and Martin Heidegger, rather than Foucault, Lyotard, and their French comrades. But he, too, claimed that professors and students alike were driving a foreign import without a license: “Our intellectual skyline has been altered by German thinkers even more radically than has our physical skyline by German architects.”

  The truth is rather more complicated. In fact, there are at least four ways to argue that the idea of a postmodern condition has undeniably American origins. To begin with, Lyotard himself acknowledged that he was introduced to this idea in the mid-1970s while attending conferences in the United States, where “the postmodern” usually meant an impending practical break in the continuum of architectural assumptions, not a theoretical problem. As Ihab Hassan, who applied the term to twentieth-century literature in an influential book of 1971, later noted, “the postmodern debate drifted from America to Europe” in the 1970s. Furthermore, the smoldering intellectual controversy over the durability and reliability of an “external” or an “independent reality”—the heart of the postmodern matter, by all accounts—was rekindled in the late 1950s by Thomas Kuhn, a historian of science at Harvard, not started in the late 1960s by French philosophers inclined to insurrection. What is more, every question (re)opened by this controversy had been asked repeatedly in the course of the twentieth century, mainly by the American pragmatists, from John Dewey to Richard Rorty, who got their start by reading William James. Finally, the German designers of the contemporary intellectual skyline were themselves indebted, and for the most part consciously so, to these pragmatist antecedents, especially to James.

  In his pathbreaking book, The Structure of Scientific Revolutions (1962), Kuhn argued that change or progress in science was not cumulative or incremental—old theories did not give way to new ones when “the” facts required it. Instead, theories themselves produced facts that could not be acknowledged or challenged by their rivals. For example, the Copernican revolution of the sixteenth century (the subject of Kuhn’s first book) produced different facts about planetary movement than its Ptolemaic predecessor because it took up a different position in observing that movement. As scientific assumptions and models changed in such “paradigm shifts,” so too did the observable world and what could be done about it—what had been excluded from view by earlier perspectives was now available for close scrutiny and active manipulation. By the same token, what had once been included was now invisible and unimportant: it was no longer actionable.

  The intellectual consequences of this argument are daunting. If there is no body of fact “out there” somewhere, a body of fact that is somehow independent of our assumptions and models—if, as Cheney put it, there are no “objective measures that an external reality would provide”—how do we decide between rival accounts of the same events, the same phenomena, on rational grounds? Do we just give up on reason in adjudicating our political differences and admit that all we have are personal opinions? Absent a fixed, external reality to which all parties can appeal, doesn’t relativism reign?

  The pragmatists always had good answers to these questions. Rational ground was not reached, they said, when reason threw desire overboard according to the protocols of Enlightenment—reason and desire, fact and value, were inextricably linked. Mind and matter were, too. “Matter is effete mind, inveterate habits becoming physical laws,” as an early pragmatist put it. Objectivity could not be obtained by pretending that there was a view of the world from nowhere, from outside the time and space we must inhabit as mortal beings: we can’t peek over the edges of our existence as if we’re not there, as God or Santa can do if he wants to check on us. So every kind of knowledge is historically situated. What we know is always a function of what we want to know and of what we can know, given our values, purposes, and technological capacities. Certainly there is no fixed, external reality in the pragmatist purview—there is no body of fact independent of our assumptions and models.

  How, then, can we decide between rival accounts of the same events, the same phenomena, on rational grounds? Not by appeal to “the” facts, because these change insofar as our assumptions and models change, insofar as our values and purposes and technological capacities change. No, one account is better than another when it includes and transcends its rivals by showing how the questions raised and the facts produced by these rivals cannot be addressed and explained in their own terms. The rival accounts always remain, however, as special cases within the new paradigm. For example, Newtonian physics was contained and completed, not obliterated, by Einstein’s theories of relativity. Or, to frame it as a social rather than a scientific problem, the older pioneer individualism was not erased by the newer social selfhood—by the new group identities that cultural pluralism and cultural politics recognized—but its cognitive status and political function were profoundly changed.

  Notice that this pragmatic approach to the problem of relativism is historical rather than philosophical and that it won’t allow opinion to stand in for argument. If you can’t show how, why, and when the rival account lost its explanatory adequacy—if you can’t tell the story of its intellectual exhaustion—and then demonstrate that your account conserves its significant findings, you haven’t staked a claim to anything more than a different opinion. “Objectivity” as Cheney understands it does take a beating here, mainly because external reality becomes fluid and malleable. But then modern science was always based on the pragmatic assumption that if you want to know the truth about external reality—the observable world of objects—you have to manipulate it. Modern scientists (as opposed to alchemists, philosophers, and mathematicians) have taught us that you can’t just posit, say, “the music of the spheres” or the existence of a “prime mover” called God, and go on from there to specify, logically, the necessary shape of the universe; you have to go into a laboratory and reproduce the motion of real, physical objects as you believe that motion must happen, and then verify your results. Modern scientists have taught us that the only way to interpret the world is to change it.

  The American pragmatists of the early twentieth century were modern scientists in this truly scary sense. They challenged and inspired the European philosophers who would remake the intellectual world of the twentieth century as such—from Émile Durkheim, Alexandre Koyré, Georges Sorel, Jean Wahl, and Alexandre Kojève in France, to Edmund Husserl, Ludwig Wittgenstein, Georg Lukács, Emil Lask, and Martin Heidegger in Germany—who would in turn challenge and inspire the transatlantic cohort of social theorists that invented the idea of the postmodern condition in the 1960s and 1970s.

 
