War: What is it good for?

by Ian Morris


  Current trends, however, make such sunny prognostications look rather unlikely. Chinese growth will probably slow over the next few decades, but most economists think it will nonetheless remain faster than American economic expansion. The Organisation for Economic Co-operation and Development (OECD), for instance, foresees Chinese growth coming down from 9.5 percent in 2013 to 4.0 percent in 2030, but in no year, it predicts, will the American economy expand by more than 2.4 percent. The Congressional Budget Office is gloomier still, setting a ceiling for American annual growth of 2.25 percent in the 2020s, and some financial analysts foresee long-term American growth averaging just 1.0–1.4 percent per year.

  Most predictions expect China’s economy to overtake America’s sometime between 2017 and 2027 (probably in 2019, and almost certainly by 2022, says The Economist). According to the accountants at PricewaterhouseCoopers, China’s GDP will be 50 percent bigger than the United States’ in the 2050s, while the OECD’s economists think the gap will be more like 70 percent. And by that point, both sets of experts agree, India’s economy will also be catching up with—or overtaking—America’s (Table 7.1).
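
  To see why the crossover forecasts cluster where they do, a short Python sketch can simply compound the two economies forward. The starting figures are the 2012 purchasing-power-parity GDPs quoted in the next paragraph (roughly $15 trillion for the United States, $12 trillion for China); the growth paths are rough assumptions loosely patterned on the OECD numbers above, not anyone’s official projection.

    # Illustrative compound-growth projection, not from the book: when does a
    # smaller, faster-growing economy overtake a larger, slower-growing one?
    # Starting GDPs are the 2012 PPP figures cited in the text; the growth
    # paths are assumptions loosely based on the OECD numbers quoted above.

    def crossover_year(us_gdp=15.0, cn_gdp=12.0, start=2012, end=2060):
        """Return the first year in which the Chinese economy exceeds the American one."""
        for year in range(start + 1, end + 1):
            # Assumed Chinese growth: 9.5% in 2013, easing linearly to 4.0% by
            # 2030, then held flat; assumed U.S. growth: a constant 2.4% ceiling.
            cn_rate = max(4.0, 9.5 - (9.5 - 4.0) * (year - 2013) / (2030 - 2013))
            us_rate = 2.4
            cn_gdp *= 1 + cn_rate / 100
            us_gdp *= 1 + us_rate / 100
            if cn_gdp > us_gdp:
                return year, round(cn_gdp, 1), round(us_gdp, 1)
        return None

    print(crossover_year())

  With these PPP inputs the crossover lands in the mid-2010s; the exact year is very sensitive to the starting values, the deflators, and whether purchasing power parity or market exchange rates are used, which is why the forecasts quoted above span a full decade.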

  Table 7.1. The post-American world? Top, PricewaterhouseCoopers’s estimates of GDP in the United States, China, and India, 2011–50 (in trillions of 2011 U.S. dollars at purchasing power parity [PPP]); bottom, the Organisation for Economic Co-operation and Development’s estimates, 2012–60 (in trillions of 2005 U.S. dollars at PPP)

  One of the reasons that American military dominance is so overwhelming in the mid-2010s is that the United States not only has a bigger economy than China (roughly $15 trillion versus $12 trillion in 2012, calculated at purchasing power parity) but also spends more of it (4.8 percent versus 2.1 percent) on preparing for war. But that too is changing. Chinese military investment, after more than doubling between 1991 and 2001 and then tripling again in the next decade, will probably slow in the 2010s; but American spending will actually shrink. After failing to agree on a plan to deal with its $16.7 trillion debt mountain—$148,000 per taxpayer—the American government imposed across-the-board cuts on itself in March 2013. Military spending, which stood at $690 billion in 2012, was capped at $475 billion; by 2023, it will be lower in real terms than it was in 2010.

  It will take China decades to catch up with the American military budget (in 2012, the gap was $228 billion at purchasing power parity), and even then it will probably not have wiped out the lead in morale, command and control, and all-around effectiveness that American forces have built up across a century of preeminence. But that, perhaps, is not the most important point. Britain ceased to be an effective globocop long before any individual foreign power could have beaten its navy in a straight fight, and much the same fate awaits the United States as soon as it can no longer afford armed forces powerful enough to intimidate everyone at once. The 2010s, warns Michael O’Hanlon of the Brookings Institution, will probably force “dramatic changes in America’s basic strategic approach to the world … [and] while hardly emasculating the country or its armed forces, [the cuts] would be too risky for the world in which we live.”

  “The most significant threat to our national security,” the outgoing chairman of the U.S. Joint Chiefs of Staff warned in 2010, “is our debt.” But this in fact understates the problem in two big ways: first, debt is just a symptom of the deeper issue of American relative economic decline (Figure 7.7); and second, the United States’ economic problems threaten the entire world’s security, not just its own.

  Figure 7.7. Slippery slope: the economic decline of the United States relative to the rest of the world—gradual in the 1950s–70s, partially reversed in the 1980s–90s, and precipitous since 2000

  If the downward trend of the last sixty years continues for another forty, the United States will lose the economic dominance it needs to be a globocop. Like Britain around 1900, it may have to farm out parts of its beat to allies, multiplying unknown unknowns. To the rising powers of the 2010s and probably the 2020s too, any move that risks war with the United States smacks of madness. But the payoffs may look very different to the rising powers of the 2030s and 2040s. Absent an American economic revival, the 2050s may have much in common with the 1910s, with no one quite sure whether the globocop can still outgun everyone else.

  The Years of Living Dangerously

  “We are headed into uncharted waters,” warns the National Intelligence Council in Global Trends 2030, the 2012 edition of the strategic foresight report that it presents every four years to the newly elected or reelected American president.[7] The real issue in the 2010s, they suggest, is not just that the United States has failed to prevent the emergence of a new rival; it is that the great-power politicking that worried the drafters of the Defense Planning Guidance twenty years ago is in fact just the tip of a much bigger iceberg of uncertainty.

  Deep below the surface, says the council, are seven “tectonic shifts,” playing out slowly across the coming decades: the growth of the global middle class, wider access to lethal and disruptive technologies, the shift of economic power toward the East and the South, unprecedented and widespread aging, urbanization, food and water pressures, and the return of American energy independence. Not all of these will work against the globocop’s interests, but at the very least all seem likely to complicate its job. Nearer the surface, the council sees six “game-changers … questions regarding the global economy, governance, conflict, regional instability, technology, and the role of the United States.” Any of these could blow up at any point, rearranging the geopolitical landscape in a matter of weeks. And right on the waterline, says the council, operating on even shorter timescales, comes a bevy of “black swans”—everything from pandemics, through solar storms that cripple the world’s electricity supply, to the collapse of the euro.

  The unstable years between 1870 and 1914 had uncertainties of their own, but, the council points out, we have now added an entirely new challenge: climate change. Of the hundred billion tons of carbon dioxide that humans have pumped into the air since 1750, a full quarter was belched out between 2000 and 2010. On May 10, 2013, the carbon dioxide in the atmosphere briefly peaked above four hundred parts per million, its highest level in 800,000 years. Average temperatures rose 1.5°F between 1910 and 2010, and the ten hottest years on record have all been since 1998.

  So far, the effects have been fairly small, but the worst impacts have come in what the council calls an “arc of instability” (Figure 7.8). The news from this crescent of poor, arid, politically unstable, but often energy-rich lands is mostly bad. Water flow in the mighty Euphrates River, which irrigates much of Syria and Iraq, has declined by one-third in recent decades, and the water table in its drainage basin dropped by a foot each year between 2006 and 2009. In 2013, Egypt even hinted at war if Ethiopia went ahead with a giant dam on the Nile. Extreme weather will roil the arc with more droughts, more crop failures, and millions more migrants. It is a recipe for more Boer Wars.

  Figure 7.8. Feeling the heat: the darker the shading on a region, the more vulnerable it is to drought. Rich countries, such as the United States, China, and Australia, can pump water from wet regions to dry ones, but poor countries—above all, those inner-rim nations in the arc of instability—cannot. Trouble may be looming if temperatures resume their upward trend in the next few decades.

  The greatest uncertainty, though, is that climate change is an unknown in the fullest sense: scientists simply do not know what will happen next. In 2013, NASA reported that “the five-year mean global temperature has been flat for a decade” (Figure 7.9). This might be good news, meaning that temperatures are less sensitive to carbon levels than climatologists had thought—in which case, global warming might stay at the low end of the estimates, adding just 1°F between 1985 and 2035. Or it might be bad news, meaning that the carbon-climate relationship is more volatile than had been thought—in which case, temperatures will suddenly spike up from their 2002–12 plateau. Few scientific debates have so much strategic significance, but, in what might be a sign of more uncertainty to come, budget cuts forced the CIA to close its Center on Climate Change and National Security at the end of 2012, just days before the Global Trends 2030 report was published.

  Figure 7.9. Strategic science: NASA estimates of global warming, 1910–2010. The gray line shows the average annual temperature and the black line the five-year running average, which—to many scientists’ surprise—has flattened off since 2002.
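
  The “five-year mean” NASA refers to is simply a centered running average of annual global temperature anomalies. A minimal Python sketch shows how the smoothed black line in Figure 7.9 is produced; the anomaly values here are invented for illustration, not NASA’s actual GISS series.

    # Centered five-year running mean, like the black line in Figure 7.9.
    # The anomaly values below are invented for illustration; NASA's GISS
    # record is the real source.

    def running_mean(values, window=5):
        """Centered moving average; None where the window is incomplete."""
        half = window // 2
        return [
            sum(values[i - half : i + half + 1]) / window
            if half <= i < len(values) - half else None
            for i in range(len(values))
        ]

    years = range(2001, 2013)
    anomalies = [0.54, 0.63, 0.62, 0.54, 0.68, 0.64, 0.66, 0.54, 0.66, 0.72, 0.61, 0.65]
    for year, smoothed in zip(years, running_mean(anomalies)):
        print(year, "-" if smoothed is None else round(smoothed, 2))

  Smoothing over five years irons out single-year spikes such as the 1998 El Niño, which is why analysts watch the running mean rather than any individual year.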

  But in spite of all this doom and gloom, the National Intelligence Council remains relatively optimistic about the outlook as far forward as 2030, the end point of its study. The globocop will probably face mounting financial pressures but will still be able to do its job; consequently, while “major powers might be drawn into conflict, we do not see any … tensions or bilateral conflict igniting a full-scale conflagration.” Further, the potential death toll from great-power conflicts is currently declining. There are no longer enough nuclear warheads in the world to kill us all: an all-out nuclear exchange in the mid-2010s might kill several hundred million people—more than World War II, but much less than the billion-plus whose lives hung in the balance when Petrov had his moment of truth. And as the 2010s go on, the scale of possible slaughter will probably fall further. All the great powers (except China) plan more nuclear reductions, and in 2013 the United States ruled out any possibility of short-term rearmament by putting its new Los Alamos plutonium production facility on hold because of money problems.

  As well as getting scarcer, warheads have gotten smaller. The bomb is a seventy-year-old technology, invented in an age when explosives tossed out of the back of a plane were lucky to land within half a mile of what they were aimed at. Multimegaton blasts solved the targeting problem by leveling entire cities, but today, when precision-guided munitions can strike within a few feet of their intended victims, these huge, expensive hydrogen bombs look like a solution to a problem that no longer exists. Accurate, low-yield nuclear warheads—or even smart conventional bombs—have largely replaced them.
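
  A standard scaling rule from the targeting literature, not taken from the book, makes that trade concrete: blast-damage radius grows roughly with the cube root of yield, so the yield needed to compensate for a given miss distance grows with the cube of that miss distance. The 20-kiloton, 250-meter reference weapon in the sketch below is an arbitrary assumption; only the scaling matters.

    # Back-of-the-envelope scaling: damage radius ~ yield ** (1/3), so the
    # yield required to cover a miss distance scales with its cube. The
    # 20-kiloton / 250-meter reference point is an arbitrary illustration.

    def required_yield_kt(miss_m, ref_yield_kt=20.0, ref_miss_m=250.0):
        """Yield (kilotons) needed to keep a point target inside the blast radius."""
        return ref_yield_kt * (miss_m / ref_miss_m) ** 3

    for miss_m in (800, 250, 100, 10, 3):  # from "half a mile" down to "a few feet"
        print(f"miss distance {miss_m:>4} m -> roughly {required_yield_kt(miss_m):g} kt")

  A hundredfold improvement in accuracy cuts the required yield by a factor of a million, which is why the multimegaton city-buster has given way to precise, low-yield weapons.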

  Even more remarkably, the computers that make smart bombs possible are also giving us antimissile defenses that actually work. There is still a long way to go, and no shield could currently hold off a serious attack by hundreds of missiles equipped with decoys and countermeasures; but in sixteen tests since 1999, the U.S. Ground-Based Midcourse Defense system hit half of the ICBMs sent against it. In November 2012, Israel’s Iron Dome system did better still, shooting down 90 percent of the slower, short-range rockets fired from the Gaza Strip (Figure 7.10).

  Figure 7.10. Iron Dome: an Israeli antimissile missile on its way to shoot down an incoming rocket over Tel Aviv, November 17, 2012
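
  Some simple probability, my illustration rather than the book’s, shows why even a good shield fails against a large salvo: what matters is the expected number of warheads that leak through, and a few percent of several hundred is still a catastrophe. The salvo sizes and shot doctrines below are assumptions, as is the independence of each intercept attempt.

    # Expected leakage through a missile shield, assuming each intercept
    # attempt succeeds independently with the same probability. The salvo
    # sizes and shot doctrines below are illustrative assumptions.

    def expected_leakers(incoming, kill_prob, shots_each=1):
        """Expected number of warheads that get through the defense."""
        return incoming * (1 - kill_prob) ** shots_each

    # A GMD-like defense with the ~50% test record noted above, firing two
    # interceptors at each of 200 incoming missiles:
    print(expected_leakers(200, 0.5, shots_each=2))   # 50.0
    # An Iron Dome-like defense with ~90% success against a 200-rocket barrage:
    print(expected_leakers(200, 0.9))                 # 20.0

  Against short-range rockets a 10 percent leak rate is tolerable; against nuclear-tipped missiles with decoys it is not, which is the gap behind the “long way to go” caveat above.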

  In the next decade or two, the computerization of war will go much further, and—initially, at least—almost everything about it will make war less bloody. When the Soviet Union tried to suppress Afghan insurgents in the 1980s, it shelled and carpet bombed their villages, killing tens of thousands of people. Since 2002, by contrast, the United States has handed over more and more of its own counterinsurgency in that country to remotely piloted aircraft. Like precision-guided missiles, drones—as they are commonly called[8]—are cheaper than alternative tools (about $26 million for a top-of-the-line MQ-9 Reaper as against an anticipated $235 million for an F-35 fighter) and kill fewer people. Estimates of civilian deaths from drone strikes in Afghanistan and Pakistan have become a political football, and vary from the low hundreds to the low thousands, but even the highest figures are much lower than the carnage any other method of going after the same targets (say, by using Special Forces or conventional air raids) would have produced.

  By 2011, air force drones had logged a million active-service flight hours, flying two thousand sorties in that year alone. The typical mission involves drones loitering fifteen thousand feet above a suspect, unseen and unheard, for up to three weeks. Sophisticated cameras (which account for a quarter of the cost of an MQ-1 Predator) record the target’s every move, beaming pictures back through a chain of satellites and relay stations to Creech Air Force Base in Nevada. Here, two-person crews sit in cramped but cool and comfortable trailers (I had the opportunity to visit one in 2013[9]) for hour after hour, watching the glowing monitors to establish the suspects’ “patterns of life.”

  Much of the time, the mission goes nowhere. The suspect turns out to be just an ordinary Afghan, falsely fingered by an angry or hypervigilant neighbor. But if the cameras do record suspicious behavior, ground forces are called in to make an arrest, usually in the dead of night to reduce the risk of a shoot-out. If alert insurgents—woken by the roar of helicopters and Humvees—creep or run away (“leakers” and “squirters,” air force pilots call them), a drone “sparkles” them with infrared lasers, invisible to the naked eye but allowing troops with night-vision gear to make arrests at their own convenience. The mere possibility of attracting drones’ attention has hamstrung jihadists: the best plan, an advice sheet for Malian insurgents warned in 2012, was to “maintain complete silence of all wireless contacts” and to “avoid gathering in open areas”—hardly a recipe for effective operations.

  Drones have become the eyes and ears of counterinsurgency in Afghanistan, and in about 1 percent of missions they also become its teeth. Tight rules of engagement bind air force crews, but when a suspect does something clearly hostile—such as setting up a mortar in the back of a truck—the pilot can squeeze a trigger on a joystick back in Nevada, killing the insurgent with a precision-guided Hellfire missile. (In Pakistan and Yemen, where the United States is technically not at war, the CIA has separate, secret drone programs. With different rules of engagement and fewer options to use ground forces, these probably use missiles and bombs more often than the air force, but here too, civilian casualties fell sharply between 2010 and 2013.)

  Drones are the thin end of a robotic wedge, which is breaking apart conventional fighting done by humans. The wedge has not widened as quickly as some people expected (in 2003, a report from the U.S. Joint Forces Command speculated that “between 2015 and 2025 … the joint force could be largely robotic at the tactical level”), but neither has it gone as slowly as some naysayers thought. “It is doubtful that computers will ever be smart enough to do all of the fighting,” the historian Max Boot argued in 2006, leading him to predict that “machines will [only] be called upon to perform work that is dull, dirty, or dangerous.”

  The actual outcome will probably be somewhere between these extremes, with the trend of the last forty years toward machines taking over the fastest and most technically sophisticated kinds of combat accelerating in the coming forty. At present, drones can only operate if manned aircraft first establish air superiority, because the slow-moving robots would be sitting ducks if a near-peer rival contested the skies with fighters, surface-to-air missiles, or signal jammers. Flying a drone over Afghanistan from a trailer in Nevada is an odd, out-of-body experience (I was given a few minutes on a simulator at Creech Air Force Base), because the delay between your hand moving the joystick and the aircraft responding can be as much as a second and a half as the signal races around the world through relay stations and satellite links. Better communications, or putting the pilots in trailers in theater, can shorten the delay, but the finite speed of light means it will never go away. In the Top Gun world of supersonic dogfights, milliseconds matter, and remotely piloted aircraft will never be able to compete with manned fighters.
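
  The latency floor is easy to check with the speed of light alone. Assuming the control link runs from Nevada over long-haul fiber to a ground station and then up and down through a geostationary relay (an assumption about the link architecture, not a detail given in the text), physics already accounts for roughly half a second of round trip before any video encoding or processing is added.

    # Rough speed-of-light latency budget for remote piloting through a
    # geostationary satellite relay. Path lengths are assumptions.

    C_KM_S = 299_792            # speed of light in vacuum, km/s
    GEO_ALT_KM = 35_786         # geostationary orbit altitude, km

    def one_way_delay_s(geo_hops=1, fiber_km=10_000):
        """One-way delay: up and down through each GEO hop plus a long fiber run."""
        space_km = geo_hops * 2 * GEO_ALT_KM
        fiber_speed = C_KM_S * 0.67     # light travels about a third slower in fiber
        return space_km / C_KM_S + fiber_km / fiber_speed

    round_trip = 2 * one_way_delay_s()
    print(f"about {round_trip:.2f} seconds of round trip before any processing")

  Add compression, relay processing, and perhaps a second satellite hop, and the second-and-a-half figure is unsurprising; moving the pilots into theater shortens the fiber run, but the climb to geostationary orbit and back cannot be engineered away.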

  The solution, an air force study suggested in 2009, might be to shift from keeping humans in the loop, remotely flying the aircraft, to having them merely “on the loop.” By this, the air force means deploying mixed formations, with a manned plane acting as wing leader for three unmanned aircraft. Each robot would have its own task (air-to-air combat, suppressing ground fire, bombing, and so on), with the wing leader “monitoring the execution of certain decisions.” The wing leader could override the robots, but “advances in AI [artificial intelligence] will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.”

  Unmanned jet fighters are already being tested, and in July 2013 one even landed on the rolling deck of an aircraft carrier (Figure 7.11), one of the most difficult tasks a (human) navy flier ever has to perform. By the late 2040s, the air force suggests, “technology will be able to reduce the time to complete the OODA [observe, orient, decide, and act] loop to micro- or nano-seconds.” But if—when—we reach that point, the obvious question will come up: Why keep humans on the loop at all?

  Figure 7.11. Look, no hands! A Northrop Grumman X-47B robot stealth fighter roars past the USS George H. W. Bush in 2013, just before becoming the first unmanned plane ever to land itself on the deck of an aircraft carrier.

  The answer is equally obvious: because we do not trust our machines. If the Soviets had trusted Petrov’s algorithms in 1983, perhaps none of us would be here now, and when the crew of the USS Vincennes did trust their machines in 1988, they shot down an Iranian passenger jet, killing 290 civilians. No one wants more of that. “We already don’t understand Microsoft Windows,” a researcher at Princeton University’s Program on Science and Global Security jokes, and so “we’re certainly not going to understand something as complex as a humanlike intelligence. Why,” he goes on to ask, “should we create something like that and then arm it?”

  Once again, the answer is obvious: because we will have no choice. The United Nations has demanded a moratorium on what it calls “lethal autonomous robotics,” and an international Campaign to Stop Killer Robots is gaining traction, but when hypersonic fighter planes clash in the 2050s, robots with OODA loops of nanoseconds will kill humans with OODA loops of milliseconds, and there will be no more debate. As in every other revolution in military affairs, people will make new weapons because if they do not, their enemies might do so first.
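
  The arithmetic behind that last claim is stark. A toy comparison, with all stage timings invented purely to illustrate the orders of magnitude, shows why a human supervisor “on the loop” could only ratify such decisions after the fact.

    # Toy comparison of observe-orient-decide-act cycle times. All the stage
    # timings here are invented to illustrate the orders of magnitude only.

    def ooda_cycle_s(observe, orient, decide, act):
        """Time to close one observe-orient-decide-act loop."""
        return observe + orient + decide + act

    human_s = ooda_cycle_s(0.25, 0.35, 0.25, 0.15)       # roughly one second
    machine_s = ooda_cycle_s(1e-6, 2e-6, 1e-6, 1e-6)     # a few microseconds

    print(f"human loop ~{human_s:.1f} s, machine loop ~{machine_s * 1e6:.0f} microseconds")
    print(f"the machine closes about {human_s / machine_s:,.0f} loops per human loop")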

 
