The World Turned Inside Out


by James Livingston


  No, really. Here is what Bork sincerely stated in Slouching Towards Gomorrah: Modern Liberalism and American Decline (1996):

  This [the shrinking number of required courses in college curricula after 1914] confirms a pattern repeatedly suggested in this book: trends slowly moving through an area of life, in this case higher education, until the Sixties when those trends accelerated rapidly. This [antecedent unknown] suggests, as noted earlier, that we would in any event have eventually arrived where the Sixties took us but perhaps two or three decades later. Which [antecedent absconded] in turn suggests that we are merely seeing the playing out of qualities—individualism and egalitarianism—inherent in Western civilization and to some degree unique to that civilization.

  Bork probably should have let Lynne Cheney read the manuscript, especially since they overlapped at the American Enterprise Institute in the early 1990s. She could have warned him off this alarming attack on the pillars of Western civilization. The world-weary William Bennett, Ronald Reagan’s secretary of education, then George Bush’s “drug czar,” and then Bill Clinton’s scold-in-chief, could have, too. In a 1992 book, Bennett, like Cheney, insisted that “we must study, nurture, and defend the West”—that is, Western civilization—in large part because “the West is good.” In the same book, he praised Judge Bork as “perhaps the finest legal mind in America.” Yet he chose to endorse Bork’s doubts about both the egalitarian imperatives of the Declaration and the individualistic premises of Western civilization. Indeed, Bennett’s paperback blurb for Slouching Towards Gomorrah makes it sound like a masterpiece of either social history or pornography: “A brilliant and alarming exploration of the dark side of contemporary American culture.”

  The culture wars inflamed by such determined combatants were fought, for the most part, on a battlefield controlled by the Left—that is, on campus, in the new public sphere of higher education. Did the larger culture give the same home-field advantage to the Left? We begin to address that question in the next chapter by going to the movies.

  chapter four

  “Signs of Signs”

  Watching the End of Modernity at the Cineplex

  Reading for the Ending

  Robert Bork and William Bennett—Mrs. Cheney, too—were of course railing against a culture that reached beyond the campuses. Their purpose was to reform (or debunk) the university and thus to redeem that larger culture. But the evidence suggests that Americans needed no prodding from tenured radicals as they moved in the 1980s and 1990s toward acceptance of equity between races, classes, genders, and sexual preferences, on their way to bemused tolerance of almost anything, including animal rights.

  To be sure, education as such remained the object of cultural critique from self-conscious conservatives. Pat Robertson, for example, the televangelist and presidential contender—he participated forcefully in the Republican primaries and debates of 1988—claimed that the public school system in the United States was “attempting to do something that few states other than the Nazis and Soviets have attempted to do, namely, to take the children away from the parents and to educate them in a philosophy that is amoral, anti-Christian and humanistic and to show them a collectivist philosophy that will ultimately lead toward Marxism, socialism and a communistic type of ideology.” Jimmy Swaggart, another televangelist, was more succinct: “The greatest enemy of our children in this United States . . . is the public school system. It is education without God.”

  Even so, the larger culture exhibited unmistakable signs of rapid, irreversible, and enormous change. The educators themselves, high and low, were being educated by that change. How, then, should we summarize that change—how should we gauge its symptoms?

  One way is to import Samuel P. Huntington’s notion of a “clash of civilizations” as the characteristic divide of the late twentieth century. Huntington was the Harvard political scientist who made his political bones back in the 1960s by planning “forced draft urbanization” in Vietnam—if the peasants aren’t out there in the countryside helping the guerrillas (the Viet Cong) because you have removed them to existing cities or concentrated them in newly constructed “urban centers,” he surmised, you can then stage direct military confrontations between American soldiers and communist insurgents. In keeping with his policy-relevant duties in the aftermath of Vietnam, he suggested in a 1996 book that impending global conflicts would turn on cultural (read: religious) divisions rather than the older divisions of political economy, which had placed capitalism and socialism at the opposite extremes of diplomatic decisions and developmental options. The domestic analogue would be the so-called culture wars, which dispensed, for the most part, with arguments about economic arrangements and instead engaged the problems of moral values, civic virtues, and familial integrity—“cultural indicators,” as Bennett called them in a flurry of articles and books.

  Another way to summarize the same great divide is to enlist Daniel Bell and to propose that the “cultural contradictions of capitalism,” as he calls them, reached their apogee in the early 1990s, when the so-called culture wars got formally declared. In the sequel to The Coming of Post-Industrial Society, Bell argued that the American social structure—the mundane routine of work, family, and daily life—“is largely bourgeois, moralizing, and cramped,” the arid domain of “traditional values,” but meanwhile the culture “is liberal, urban, cosmopolitan, trendy, fashionable, endorsing freer lifestyles, and permissive.” In other words, the bourgeois values inherited from the nineteenth century became problematic if not merely obsolete in the postindustrial rendition of twentieth-century consumer capitalism. To borrow the terms invented by Raymond Williams, the residual (bourgeois) culture was still committed to deferring gratification, saving for a rainy day, and producing character through hard work, whereas the dominant (capitalist?) culture was already animated by a market-driven hedonism—a “consumer culture”—in which such repressive commitments seemed quaint.

  But notice that the Cultural Left ensconced in the universities was aligned with the bohemian values validated by a postindustrial consumer capitalism, whereas the New Right enfranchised by the churches and the think tanks was opposed to these same values. In this sense, once again, conservatism in the late twentieth century was not a blanket endorsement of what free markets make possible; like the radicalism of the same moment in American history, it was a protest against the heartless logic of the market forces created and enforced by consumer capitalism.

  From either standpoint, however, Huntington’s or Bell’s, we witness a nineteenth-century version of self, family, and nation competing with a twentieth-century version. From either standpoint, bourgeois society struggles to survive against the global tentacles of postindustrial consumer capitalism. Perhaps the impending conclusion of this struggle, the impending extinction of bourgeois society, is what we mean—and is all we can mean—by the end of modernity. The modern world, the “era of the ego,” was, after all, created by bourgeois individuals eminently capable of deferring gratification.

  But most Americans were not reading Huntington or Bell in the 1980s and 1990s. Nor were they using Judith Butler’s poststructuralist vocabulary to understand what was happening to them. How then did they experience and explain the end of modernity? The question can be asked in more specific ways. Were these academic theorists just making it up? Or were they making sense of new realities—of fundamental changes? Was there a colloquial, vernacular idiom in which these changes were anticipated, recorded, codified? To answer, let us revisit some hugely popular movies of the late twentieth century—let us see what they require us to experience and explain—and then, in the next chapter, turn to some equally popular cultural forms, TV and music.

  Big Movies, Big Ideas

  To begin with, let us have a look at The Matrix, Terminator II, and Nightmare on Elm Street, each a part of a movie “franchise” in which increasingly intricate—or ironic—sequels retold the same story from new angles. The preposterously complicated plot of the original Matrix (1999) is almost beside the point. But for those of you who haven’t seen it, here goes. In a postholocaust future that resembles the scorched earth of the Terminator movies, machines have taken over the world: technological hubris has finally put an end to progress. Human beings have been reduced to dynamos whose metabolism is converted into the energy the machines need to—what?—go about their evil business. These benighted human beings just think that they’re going to work on those familiar city streets (the “city” looks like Chicago). In fact, those streets are only holograms projected by the machines to keep their energy source happy. As in the Nightmare franchise, appearance and reality are identical, at least in the beginning.

  But an underground movement exists to wake these unwitting creatures up by bringing them out of the Matrix and teaching them how to fight the power on its own holographic terms. This movement recruits Neo, a young blank slate of a man—played of course by Keanu Reeves, a young blank slate of a man—onto whom the underground leader has projected his millennial ambitions. Neo (his screen name) turns out to be the “chosen one” after all; he quickly surpasses his teacher, the leader, and becomes a virtual martial artist who kicks virtual ass.

  Neo learns to enter and disable the Matrix, thus revealing the awful reality beneath the normal, hopeful images that sustain the physical life of the dynamos down on the energy farm. The assumption here is that human beings can’t stay alive without hopes and dreams: if they knew that they were merely cogs in a vast energy-producing machine, they would surely die. By the same token, if they lived in a perfect world, they would know from their experience of Western religion—which insists that you can’t get to heaven until your body expires—that they were dead. In both settings, they would be unhappy, but their hopes for a brighter future that is somehow different from the abiding present would keep them alive; the evil designers of the Matrix introduce imperfection into the grid when they realize this simple truth of human nature.

  In The Matrix, the artificial finally overpowers the real, or rather the natural; meanwhile, the expectation of an end to the illusions of the holographic world finally becomes a religious urge that displaces any residual pretense of science fiction. The monstrous agents of the Matrix are shape-shifting, indestructible machines that inhabit and impersonate human beings. But Neo has no oppositional force or effect against them unless he’s “embodied” as a slice of computer code and inserted into a holographic “reality”—until he’s “embodied” in recognizable human form as a part of a machine. And his triumph over these agents of dehumanization is a result of his belief in himself as the messiah (the “chosen one”), which requires first a consultation with “the Oracle”—a woman who, by the way, inhabits the Matrix, not the scene of resistance—and then the loss of his corporeal form. At any rate, the laws of gravity and mortality no longer apply to our hero by the end of this movie: he has become a godlike creature who can soar like Superman.

  Terminator II (1991; the original was in 1984) has no less of an appetite for biblical gestures and sacrificial rites. But the cyborg from the future who helps save the world from the bad machines isn’t an immaterial, possibly immortal presence like Neo. He’s always embodied. And even though he’s mostly machine—his apparent humanity is only skin-deep—he’s a better father to young John Connor, the leader of the coming rebellion against “Skynet,” than anyone else in view. “This machine was the only thing that measured up,” his mother, Sarah, says while watching the son and the cyborg do manly, mechanical things under the hood of a car.

  In the original installment of the franchise, Sarah Connor is impregnated by a soldier sent back from the postapocalyptic future to protect her from the cyborg intent upon killing her; somehow everybody knows that her offspring will some day lead the rebellion against the machines. In Terminator II, the stakes are even higher. Sarah wants to abort the apocalypse, and her son pitches in with the help of the same model of cyborg that, once upon a time, came after his mother. In aborting the apocalypse, she is of course relieving her son of his heroic duties in the dreaded future—in the absence of real fathers in the flesh, after all, mothers have to do what’s right.

  The apocalypse is finally aborted in three strokes. The Connors and their protector destroy the computer chip from the original cyborg of Terminator I, which has fueled research and profits at the malevolent corporation that invented Skynet, the digital universe of knowledge to be captured by the bad machines on August 29, 1997. Then they defeat a new, more agile and flexible cyborg sent back to kill young John by dipping the thing in molten metal—the end of the movie is shot in what looks like a cross between a foundry and a steel plant, both throwbacks to an imaginary, industrial America where manly men worked hard and earned good pay (Freddy Krueger stalks his teenage victims in a strikingly similar dreamscape, as if what haunts them, too, is an irretrievable and yet unavoidable industrial past). Finally, the old, exhausted, even dismembered protector cyborg lowers himself into the same vat of molten metal that had just dispatched his robotic nemesis, thus destroying the only remaining computer chip that could restart the train of events that led to Skynet.

  So the narrative alternatives on offer in Terminator II are both disturbing and familiar: Dads build machines—or just are machines—that incinerate the world, or they get out of the way of the Moms. Like the cowboys and outlaws and gunfighters of the old West, another imaginary landscape we know mainly from the movies, such men might be useful in clearing the way for civilization, but they probably shouldn’t stick around once the women and children arrive.

  The endless sequels to the original Nightmare on Elm Street (1984) follow the trajectory of the Terminator franchise in one important respect—the indomitable villain of the 1980s evolves into a cuddly icon, almost a cult father figure, by the 1990s. But the magnificent slasher Freddy, who punctures every slacker’s pubescent dreams, always preferred the neighborhood of horror, where apocalypse is personal, not political: it may be happening right now, but it is happening to you, not to the world.

  Here, too, however, the plot is almost irrelevant because it turns on one simple device. It works like this. The violence and horror of your worst nightmares are more real than your waking life; the dreamscapes of the most insane adolescent imagination are more consequential than the dreary world of high school dress codes and parental aplomb: welcome to Columbine. Freddy teaches us that the distinction between appearance and reality, the distinction that animates modern science—not to mention the modern novel—is not just worthless, it is dangerous. If you don’t fight him on his own postmodern terms, by entering his cartoonish space in time, you lose your life. If you remain skeptical, in the spirit of modern science or modern fiction, you lose your life.

  The enablers of every atrocity in sight are the parents and the police (the heroine’s father, for example, is the chief of police), who are complacent, ignorant, and complicit, all at once. They killed the child molester Freddy years ago when he was freed on a legal technicality—or at least they thought they killed him—and so his revenge on their children seems almost symmetrical: the vigilantes in the neighborhood are now victims of their own extralegal justice. And their hapless inertia in the present doesn’t help the kids. In fact, their past crimes have disarmed their children. The boys on the scene aren’t much help either—they’re too horny or too sleepy to save anybody from Freddy’s blades, even when the girls explain what will happen if they don’t stand down, wake up, and get right with their bad dreams.

  The Cultural Vicinity of The Matrix

  So what is going on in the cultural vicinity of these hugely popular, truly horrific scenarios? At least the following. First, representations are reality, and vice versa. The world is a fable, a narrative machine, and that’s all it is. The directors of The Matrix make this cinematic provocation clear by placing a book in the opening sequences—a book by Jean Baudrillard, the French theorist who claimed a correlation between finance capital and the historical moment of “simulacra,” when everything is a copy of a copy (of a copy), not a representation of something more solid or fundamental. At this moment, the reproducibility of the work of art becomes constitutive of the work as art: nothing is unique, not even the artist, and not even you, the supposed author of your own life. Everything is a sign of a sign. The original Nightmare had already proved the same postmodern theorem with more gleeful ferocity and less intellectual pretension, but it performed the same filmic experiment and provided the same audience experience. Terminator II accomplishes something similar by demonstrating that the past is just as malleable as the future: again, the world is a fable waiting to be rewritten.

  Second—this follows from Baudrillard’s correlation of finance capital and the historical moment of simulacra—the world is, or was, ruled by exchange value, monopoly capital, and their technological or bureaucratic offspring. The apocalypse as conceived by both The Matrix and Terminator II is a result of corporate-driven greed (in the latter, the war that arms the machines is fought over oil). An ideal zone of use value beyond the reach of the market, a place where authentic desires and authentic identities are at least conceivable, is the coast of utopia toward which these movies keep tacking. The movies themselves are of course commodities that could not exist without mass markets and mass distribution, but there is no hypocrisy or irony or contradiction lurking in this acknowledgment. Successful filmmakers understand and act on the anticapitalist sensibilities of their audiences—none better than Steven Spielberg. Even so, they know as well as we do that there’s no exit from the mall, only detours on the way.

  Third, the boundary between the human and the mechanical, between sentient beings and inanimate objects, begins to seem arbitrary, accidental, inexplicable, and uncontrollable. Blade Runner (1982) and RoboCop (1987), perhaps the two best movies of the 1980s, are testaments to this perceived breakdown of borders, this confusion of categories: the good guys here are conscientious machines that are more human than their employers. That these heroes are victims of both corporate greed and street gangs does not change the fact that, like the tired old cyborg of Terminator II, their characters and missions were lifted directly from the Westerns of the 1930s and 1940s—they’re still figuring out what it means to be a man while they clean up Dodge City, but now they know that machines, not lawyers, might do the job better. Again, the artificial overpowers the natural and remakes the world. A fixed or stable reality that grounds all representation and falsifies mere appearance starts to look less detailed and to feel less palpable than the imagery through which we experience it; or rather the experience is the imagery. So the end of Nature, conceived by modern science as an external world of objects with its own laws of motion, is already at hand, already on display. The world has been turned inside out.

 
