The World Turned Inside Out


by James Livingston


  What if the story told soon after 9/11 had portrayed these men as rationally using the principal weapon of the weak in seeking to redress specific political grievances and to change recent American foreign policy? Clearly, war would not have been the only actionable answer, the only conceivable consequence. Changes in the relevant strategic positions would not necessarily have been the appropriate response either. But we would have known that the American way of life was not at stake in responding to the attacks of 9/11, that maintaining the distinction between law and strategy was necessary, and that bargaining with the enemy was therefore possible. The consequence of this narrative, this knowledge, would be a very different world than the one we now inhabit; at any rate, we can be sure that it would not be on a permanent war footing and that the Middle East would not still be the site of desperate armed struggle.

  But it is notoriously difficult to prove a negative—which is why historians are not supposed to ask “what if” questions. Fortunately, there is another way to prove that, if our purpose is to explain, address, and contain the new terrorism, war is not the answer. It takes us to Iraq in the fourth year of the American invasion, when more than fifty U.S. soldiers and marines were dying every month at the hands of Shi’ite militias in Baghdad and Sunni insurgents in Anbar Province—both factions used recognizably terrorist tactics, including ethnic cleansing and suicide bombings—when “national reconciliation” by means of political compromise between the Muslim sects was still a joke, and when Iran was becoming the key player in the reconstruction of Iraq’s battered infrastructure. This was the year of the so-called surge, which placed thirty thousand additional troops in Baghdad to pacify the center of armed resistance to the American occupation.

  On the face of it, then, the “surge” of 2007 was a military solution to a military problem—the lack of security determined by terrorism, which made the day-to-day give-and-take of pluralist politics unthinkable. And on the face of it, the “surge” worked. Violence in Baghdad plummeted, and the peculiarly vicious Sunni insurgency in Anbar—where officers from Saddam Hussein’s disbanded army had allied with a new offshoot of al Qaeda—quickly receded. By 2008, progress toward “national reconciliation” was enacted in a parliamentary compromise on sharing oil revenues among the Shi’ite majority and the Sunni and Kurdish minorities. In these terms, war was the answer to the rise of terrorist movements of resistance in Iraq: The logic of the larger, global war on terror was validated by the success of the “surge.” Certainly the proponents of the war and the “surge” said as much.

  But in fact, the so-called surge did not work as a military solution. It worked instead as a counterterrorist strategy that acknowledged the primacy of specific political grievances (most of which pertained to perceived inequities of proportionate power within postwar Iraq). Indeed, the “surge” could not have worked as a military solution. According to the U.S. military’s new Field Manual, prepared under the direction of the U.S. commander in Iraq during the “surge,” an effective counterinsurgent force requires at least twenty combatants per one thousand members of the local population. By this calculation, 120,000 troops would have been required in Baghdad alone, a city of six million inhabitants. The difference was not, and could not have been, covered by Iraqi forces, which were still ethnically divided and still incapable of supplying their own logistical support.
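
  As a rough check on that figure, the Field Manual’s arithmetic is simple (a minimal sketch, assuming the twenty-per-thousand ratio and the six-million population cited above):

$$
\text{required force} = \frac{20}{1000} \times 6{,}000{,}000 = 120{,}000 \ \text{troops}
$$

  The actual “surge” force in Baghdad fell far short of that threshold, which is the point of the comparison.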

  So the rapid fall of terrorist violence in Iraq between mid-2007 and late 2008 was not the result of a military victory. It was instead the result of Iran’s orders to Shi’ite militias in Baghdad and the U.S. military’s negotiations with the Sunni insurgents in Anbar. The most formidable militia in the capital was the Mahdi Army, led by the radical Shi’ite cleric Moktada al-Sadr, whose anti-American rhetoric had made him a national hero. This militia practically disappeared from the streets in 2007, when the force that trained it, Iran’s Revolutionary Guard, ordered it to stand down. Meanwhile, in keeping with the new Field Manual’s political imperatives, the U.S. military bribed the Sunni tribal leaders and former Baathists of Anbar, who in turn armed their followers and disarmed their former allies from al Qaeda in Mesopotamia. In short, a war of position worked where a war of maneuver had failed. Or rather, war as such was not the answer in stabilizing Iraq.

  The original architects of the Open Door believed that war as such was a problem, not a solution to perceived inequities in the distribution of global resources and opportunities, particularly since it spawned revolutionary movements—for example, both the communist and fascist movements of the 1920s were born in the throes of World War I—which in turn closed off huge swaths of the world market and created the conditions for trade war. Again, the Open Door world as conceived by these architects was a place in which military solutions to international disagreements were sometimes necessary but always a last resort; in which armed force did not exhaust the meanings of world power; in which unilateral “preemptive actions” had become obsolete, even destructive; in which multilateral institutions constrained such actions and thus enabled the sovereignty of all nations; in which economic growth and development were the necessary conditions of world peace; in which exclusive spheres of influence were the crucial obstacle to such growth and development; and in which the colonial culture of racism—the “white man’s burden”—could not thrive because it severed the continuum of civilization.

  In addressing and shaping the second stage of globalization, these original architects of the Open Door believed they had found a way to reduce the role of militaries in the articulation of foreign policies and the conduct of international relations. They believed they had found a way to ease the inevitable passage of the seat of empire, perhaps even dispensing with war as its spastic, deadly accompaniment. Perhaps we have accidentally rediscovered their way of thinking in fighting, and losing, a “war on terror” in Afghanistan, in Iraq, and at home. If that is the case, the American Century could have a happy ending after all.

  Coda

  Keep Arguing

  Now that you’ve read this book, what do you know that you didn’t before? That’s the pragmatic test of any idea, category, or assertion. If there are no consequences, you can forget about it.

  A better way to ask the same question is, what could you argue after reading this book? Here’s a preliminary list.

  You could argue that the Reagan Revolution was not a merely conservative movement and that, by their own account, the supply-side radicals who stormed the Keynesian citadel were repulsed by the Congress and the American people. You could argue that socialism was an active dimension of political and intellectual life in the late-twentieth-century United States—and your argument might well be confirmed by the new regulatory apparatus invented to address the economic crisis that began in 2008.

  In short, you could argue that the country kept moving to the left after 1975, so that it was much more liberal at the end of the century than it was before Reagan took office. Not that conservatism was eclipsed—no, the point is that conservatives themselves kept worrying about the social and moral effects of free markets and consumer capitalism and that liberals were meanwhile evolving into what we used to call social democrats.

  You could argue that higher education is better off these days precisely because the Left, broadly construed, took over the pilot disciplines of the liberal arts in the 1980s. You could correlate that takeover with the larger shift of American culture and politics to the left of center. You could even correlate that takeover with the increasing diversity of both the student body and the faculty after the fourth great watershed in higher education. Look around: the university is a very different place than it used to be, and that’s a good thing. Or maybe not. But now you have an argument about it—and that’s a very good thing.

  You could argue that poststructuralism and postmodernism are not foreign imports but are, instead, homegrown theories that make good sense of the new world of the late twentieth century. You could even say that pragmatism, that venerable American philosophy, is the origin of it all, deconstruction included. Maybe you wouldn’t want to go that far. You would, however, want to argue that modern individualism has been in question—a point of contention—since at least the 1890s, so that the notion of a “socially constructed self” was not exactly new in the 1990s. Or maybe it felt new because gender trouble was so clearly its origin.

  You could argue that feminism changed everything, inside academia and out, but also that feminism, like liberalism or conservatism, is an extremely diverse set of attitudes toward history and thus contains a programmatic urge that is anything but uniform. You could also argue that the so-called culture wars of the 1990s were an intramural sport on the Left—or that the right-wing positions in these conflicts kept losing. Either way, you could say that Americans were trending left after Reagan, to the point where arguments over gay rights became a normal part of political discourse and the 1993 Pulitzer Prize went to an avowed Marxist whose play carried the subtitle of a “Gay Fantasia.”

  You could argue that the end of modernity was on view at the Cineplex in the late twentieth century, where horror became the movie mainstream and male masochism became commonplace. In other words, you could say that when the professors started talking about gender trouble in the big words of poststructuralism, they were translating the evidence available on screen, for example, from I Spit on Your Grave, a truly awful movie. Or not—think of The Matrix, which makes Jean Baudrillard and Cornel West, two professors, its patron saints. You could say, in conclusion, let’s get over the either/or choice. By now we know this postmodern impulse goes both ways, up and down, as the line between lowbrow and highbrow culture dissolves in the late twentieth century.

  You could argue that the antirealist tendency of late-twentieth-century TV—cartoons, angels, demons, vampires, everywhere!—signifies the stupidity of the audience. You could argue instead that this tendency has been deeply embedded in the American literary/artistic tradition since at least the 1820s. You could say, as a result, that all those supernatural creatures were gathering to remind us that freedom does not reside in the abolition of our earthly, particular circumstances but rather in our ability to reshape them in accordance with our purposes. You could say that real love requires bodies that matter—it requires passion rather than metaphysics or immortality. You might even say that angels in America have always been a reminder of this life, not the next.

  You could argue that excrement is the raw material of our time—it is the way we recognize and handle the demonic forces and properties of globalization. Or just go ahead and call it a world of shit. It’s where you live, no matter what your address, whether in Guatemala City or New York City. Everybody’s insides are now outside, reminding us that the sacred “interiority” of modern individualism is at risk and that the universalization of exchange value is complete. You could go on to say that it is only in the cartoonish universe of late-twentieth-century TV that this excremental issue has been seriously, comically, and productively addressed, by drawing anal probes from outer space and singing in blackface about hell on earth.

  You could argue that the music of the late twentieth century, in all its demented fury and unruly variations, was a way to address the gender trouble of that millennial time. You could say that the trademark genres of the moment—disco, punk, heavy metal, and hip-hop—were ways of reasoning in music that were just as profound as the professorial idiom on offer in the classroom. Maybe more so, because they performed the deconstruction and recombination of identities you heard about in lectures and read about in books.

  You could argue that the “personal” computer is anything but. You could say that it removes you from the scene of modern individualism, where you brood in private, and that it delivers you unto a new world of public, social, anonymous discourse, where you try on new identities. Every day. And you get to Google yourself.

  And you could argue that the American Century was both longer and shorter than you thought. It began earlier than even Henry Luce believed, and it ended only recently, with the militarization of American foreign policy. Or did it? You could argue that we still inhabit the world invented by the Open Door Notes. You could argue that this expansive place has been whirled away by the Bush Doctrine and the very idea of a “war on terror.” Either way, you might be right.

  So let’s keep arguing.

  Appendix

  Their Great Depression and Ours

  This essay was written and published at my blog in October 2008, then reprinted at History News Network (www.historynewsnetwork.org) and excerpted at TNR.com, AmericanProspect.com, Salon.com, MotherJones.com, and a dozen other websites. It is reprinted here because it speaks directly to the questions raised in chapter 1 about supply-side economics.

  I

  Now that everybody is accustomed to citing the precedent of the Great Depression in diagnosing the recent economic turmoil—and now that a severe recession is unfolding—it may be useful to treat these episodes as historical events rather than theoretical puzzles. The key question that frames all others is simple: Are these comparable moments in the development of American capitalism? To answer this question is to explain their causes and consequences.

  Contemporary economists seem to have reached an unlikely consensus in explaining the Great Depression—they blame government policy for complicating and exacerbating what was just another business cycle. This explanation is still gaining intellectual ground, and it deeply informed opposition to the bailout plan. The founding father here is Milton Friedman, the monetarist who argued that the Federal Reserve unknowingly raised real interest rates between 1930 and 1932 (nominal interest rates remained more or less stable, but as price deflation accelerated across the board, real rates went up), thus freezing the credit markets and destroying investor confidence.
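
  To see the mechanism Friedman described, recall the Fisher relation: the real interest rate is approximately the nominal rate minus the inflation rate, so when prices are falling the subtracted term turns negative and a stable nominal rate conceals a sharply rising real rate. A minimal worked example, with illustrative numbers rather than historical figures:

$$
r \approx i - \pi, \qquad \text{e.g.} \quad r \approx 2\% - (-10\%) = 12\%
$$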

  But the argument that government was the problem, not the solution, has no predictable political valence. David Leonhardt’s piece for the New York Times (10/1/08) is the liberal version of the same argument: if government does its minimal duty and restores liquidity to the credit markets, this crisis will not devolve into the debacle that was the Great Depression. Niall Ferguson’s essay for Time magazine titled “The End of Prosperity” (10/6/08) takes a similar line: “Yet the underlying cause of the Great Depression—as Milton Friedman and Anna Jacobson Schwartz argued in their seminal book A Monetary History of the United States, 1867–1960, published in 1963—was not the stock market crash but a ‘great contraction’ of credit due to an epidemic of bank failures.” Ben Bernanke’s argument for the buyouts and the bailout derives, of course, from the same intellectual source. At Friedman’s ninetieth birthday party in 2002, Bernanke, then a member of the Fed’s board, said, “I would like to say to Milton and Anna: Regarding the Great Depression. You’re right, we did it. We’re very sorry. But thanks to you, we won’t do it again.”

  The assumption that regulates the argument, whether conservative or liberal, is that these two crises are like any other and can be managed by a kind of financial triage, by treating the immediate symptoms and hoping the patient’s otherwise healthy body will bring him back to a normal, steady state. Certain fragile or flamboyant or fraudulent institutions will be liquidated in the normal course of this standard-issue business cycle, and that is a good thing—otherwise the “moral hazard” of validating the “corrupt and incompetent practices of Wall Street and Washington,” as John McCain puts it, will be incurred.

  Crisis management, by this accounting, is an occasional activity that always addresses the same problems of liquidity and moral hazard. By the same accounting, the long-term causes of crisis must go unnoticed and untreated because they are temporary deviations from the norm of market-determined equilibrium and because the system appears to be the sum of its parts—if the central bank steps in with “ready lending” when investor confidence falters, these parts will realign themselves properly and equilibrium will be restored.

  From this standpoint, the Great Depression and today’s economic crisis are comparable not because they resulted from similar macroeconomic causes but because the severity of the credit freeze in both moments is equally great, and the scope of the financial solution must, then, be equally far-reaching. Then and now, as Anna Schwartz explained in an interview with the Wall Street Journal (10/18/08), a “credit tightening” accounts for the collapse of the boom.

  There is another way to explain the Great Depression, of course. It requires looking at the changing structure, or “long waves,” of economic growth and development, digging all the while for the “real,” rather than the merely monetary, factors. This explanatory procedure focuses on “the fundamentals” and typically treats the financial system as a tertiary sector that merely registers the value of goods on offer—except when it becomes the repository of surplus capital generated elsewhere, that is, when personal savings and corporate profits cannot find productive outlets and flow instead into speculative channels.

  The long-wave approach has fallen out of favor as more mainstream economists have adopted the assumptions enabled by the Friedman-Schwartz rendering of monetary history. This structural approach does, however, make room for crisis management at the moment of truth; here, too, the assumption is that financial triage will suffice during the economic emergency. When things settle down, when normal market conditions return, the question of long-term trends will remain.

  The problem with the long-wave approach—the reason it has less traction than the tidy alternative offered by Friedman and Schwartz—is that it cannot specify any connection between macroeconomic realities and conditions in the financial markets. Michael Bernstein’s brilliant book on the origins of the Great Depression, for example, treats the stock market crash of 1929 as a “random event” that complicated and amplified events happening elsewhere in the economy.

 
