War: What is it good for?


by Ian Morris


  Battle, the former U.S. Army lieutenant colonel Thomas Adams suggests, is already moving beyond “human space” as weapons become “too fast, too small, too numerous, and … create an environment too complex for humans to direct.” Robotics is “rapidly taking us to a place where we may not want to go, but probably are unable to avoid.” (I heard a joke at Nellis Air Force Base: the air force of the future will consist of just a man, a dog, and a computer. The man’s job will be to feed the dog, and the dog’s job will be to stop the man from touching the computer.)

  Current trends suggest that robots will begin taking over our fighting in the 2040s—just around the time, the trends also suggest, that the globocop will be losing control of the international order. In the 1910s, the combination of a weakening globocop and revolutionary new fighting machines (dreadnoughts, machine guns, aircraft, quick-firing artillery, internal combustion engines) ended a century of smaller, less bloody wars and set off a storm of steel. The 2040s promise a similar combination.

  Opinions vary over whether this will bring similar or even worse results than the 1910s saw. In the most detailed (or, according to taste, most speculative) discussion, the strategic forecaster George Friedman has argued that hugely sophisticated space-based intelligence systems will dominate war by 2050. He expects American power to be anchored on a string of these great space stations, surrounded and protected by dozens of smaller satellites, in much the same way that destroyers and frigates protect contemporary aircraft carriers. These orbiting flotillas will police the earth below, partly by firing missiles but mainly by collecting and analyzing data, coordinating swarms of hypersonic robot planes, and guiding ground battles in which, suggests Friedman, “the key weapon will be the armored infantryman—a single soldier, encased in a powered suit … Think of him as a one-man tank, only more lethal.”

  The focus of mid-twenty-first-century fighting—what Clausewitz called the Schwerpunkt—will be cyber and kinetic battles to blind the space flotillas, followed by attacks on the power plants that generate the vast amounts of energy that the robots will need. “Electricity,” Friedman speculates, “will be to war in the twenty-first century as petroleum was to war in the twentieth.” He foresees “a world war in the truest sense of the word—but given the technological advances in precision and speed, it won’t be total war.” What Friedman means by this is that civilians will be bystanders, looking on anxiously as robotically augmented warriors battle it out. Once one side starts losing the robotic war, its position will quickly become hopeless, leaving surrender or slaughter as the only options. The war will then end, leaving not the billion dead of Petrov’s day, or even the hundred million of Hitler’s, but, Friedman estimates, more like fifty thousand—only slightly more than die each year in automobile accidents in the United States.

  I would like to believe this relatively sunny scenario—who wouldn’t?—but the lessons of the last ten millennia of fighting make it difficult. The first time I raised the idea of revolutions in military affairs, back in Chapter 2, I observed that there is no new thing under the sun. Nearly four thousand years ago, soldiers in southwest Asia had already augmented the merely human warrior by combining him with horses. These augmented warriors—charioteers—literally rode rings around unaugmented warriors plodding along on foot, with results that were, in one way, very like what Friedman predicts. When one side lost a chariot fight around 1400 B.C., its foot soldiers and civilians found themselves in a hopeless position. Surrender and slaughter were their only options.

  New kinds of augmentation were invented in first-millennium-B.C. India, where humans riding on elephants dominated battlefields, and on the steppes in the first millennium A.D., where bigger horses were added to humans to produce cavalry. In each case, once battle was joined, foot soldiers and civilians often just had to wait as the pachyderms or horsemen fought it out, hoping for the best. Once again, whoever lost the animal-augmented fight was in a hopeless position.

  But there the similarities with Friedman’s scenario end. Chariots, elephants, and cavalry did not mount surgical strikes, skillfully destroying the other side’s chariots, elephants, and cavalry and then stopping. Battles did not lead to cool calculations and the negotiated surrender of defenseless infantry and civilians. Instead, wars were no-holds-barred frenzies of violence. When the dust settled after the high-tech horse and elephant fighting, the losers regularly got slaughtered whether they surrendered or not. The age of chariots saw one atrocity after another; the age of elephants was so appalling that the Mauryan king Ashoka forswore violence in 260 B.C.; and the age of cavalry, all the way from Attila the Hun to Genghis Khan, was worse than either.

  All the signs—particularly on the nuclear front—suggest that major wars in the mid-twenty-first century will look more like these earlier conflicts than Friedman’s optimistic account. We are already, according to the political scientist Paul Bracken, moving into a Second Nuclear Age. The First Nuclear Age—the Soviet-American confrontation of the 1940s–80s—was scary but simple, because mutual assured destruction produced stability (of a kind). The Second Age, by contrast, is for the moment not quite so scary, because the number of warheads is so much smaller, but it is very far from simple. It has more players than the Cold War, using smaller forces and following few if any agreed-on rules. Mutual assured destruction no longer applies, because India, Pakistan, and Israel (if or when Iran goes nuclear) know that a first strike against their regional rival could conceivably take out its second-strike capability. So far, antimissile defenses and the globocop’s guarantees have kept order. But if the globocop does lose credibility in the 2030s and after, nuclear proliferation, arms races, and even preemptive attacks may start to make sense.

  If major war comes in the 2040s or ’50s, there is a very good chance that it will begin not with a quarantined, high-tech battle between the great powers’ computers, space stations, and robots but with nuclear wars in South, Southwest, or East Asia that expand to draw in everyone else. A Third World War will probably be as messy and furious as the first two, and much, much bloodier. We should expect massive cyber, space, robotic, chemical, and nuclear onslaughts, hurled against the enemy’s digital and antimissile shields like futuristic broadswords smashing at a suit of armor, and when the armor cracks, as it eventually will, storms of fire, radiation, and disease will pour through onto the defenseless bodies on the other side. Quite possibly, as in so many battles in the past, neither side will really know whether it is winning or losing until disaster suddenly overtakes it or the enemy—or both at once.

  This is a terrifying scenario. But if the 2010s–50s do rerun the script of the 1870s–1910s, with the globocop weakening, unknown unknowns multiplying, and weapons growing ever more destructive, it will become increasingly plausible.

  The New England saying, then, may be true: perhaps we really can’t get there from here.

  Unless, that is, “there” isn’t where we think it is.

  Come Together

  The secret of strategy is knowing where you want to go, because only then can you work out how to get there. For more than two hundred years, campaigners for peace have been imagining “there”—a world without war—in much the way that Kant did, as something that can be brought into being by a conscious decision to renounce violence. Margaret Mead insisted that war is something we have invented, and therefore something we can uninvent. The authors of “War” suggested that standing up and shouting that war is good for absolutely nothing would end it. Political scientists tend to be less idealistic, but many of them also argue that conscious choice (this time, to build better, more democratic, and more inclusive institutions) will get us there from here.

  The long-term history I have traced in this book, however, points in a very different direction. We kill because the grim logic of the game of death rewards it. On the whole, the choices we make do not change the game’s payoffs; rather, the game’s payoffs change the choices we make. That is why we cannot just decide to end war.

  But long-term history also suggests a second, and more upbeat, conclusion. We are not trapped in a Red Queen Effect, doomed to rerun the self-defeating tragedy of globocops that create their own enemies until we destroy civilization altogether. Far from keeping us in the same place, all the running we have done in the last ten thousand years has transformed our societies, changing the payoffs in the game; and in the next few decades the payoffs look likely to change so much that the game of death will turn into something entirely new. We are beginning to play the endgame of death.

  To explain what I mean by this rather cryptic statement, I want to step back from the horrors of war for a moment to take up some of the arguments in my two most recent books, Why the West Rules—for Now and The Measure of Civilization. As I mentioned at the end of Chapter 2, in these publications I presented what I called an index of social development, which measures how successful different societies have been at getting what they wanted from the world across the fifteen thousand years since the last ice age. The index assigned social development scores on a scale from 0 points to 1,000, the latter being the highest score possible under the conditions prevailing in the year A.D. 2000, where the index ended.

  Armed with this index, I asked—partly tongue in cheek and partly not—what would happen if we projected the scores forward. As with any prediction, the results depend on what assumptions we make, so I took a deliberately conservative starting point, asking how the future will shape up if development continues increasing in the twenty-first century just at the pace it did in the twentieth. The result, even with such a restrictive assumption, was startling: by 2100, the development score will have leaped to 5,000 points. Getting from a caveman painting bison at Lascaux to you reading this book required development to rise by 900 points; getting to 2100 will see it increase by another 4,000 points.
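
  For readers who like to see the arithmetic, here is a minimal sketch of that projection in Python. The 2000 figure is the index ceiling given above; the 1900 starting score is an illustrative assumption of mine, chosen only to show how a constant twentieth-century growth rate compounds to roughly 5,000 points by 2100.

```python
# Back-of-the-envelope projection of the social development index.
# The 1900 score is an illustrative assumption; the 2000 ceiling of
# 1,000 points comes from the text above.
score_1900 = 200        # assumed, for illustration only
score_2000 = 1_000      # top of the index scale in A.D. 2000

# Read "the pace it did in the twentieth" as the same multiplicative
# growth repeated over the next hundred years.
growth_per_century = score_2000 / score_1900          # 5.0
score_2100 = score_2000 * growth_per_century          # 5,000

print(f"Projected 2100 score: {score_2100:,.0f} points")
print(f"Rise from 2000 to 2100: {score_2100 - score_2000:,.0f} points")
```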

  “Mind-boggling” is the only word for such a prediction—literally, because one of the major implications of such soaring development is that the human mind itself will be transformed during the century to come. Computerization is not just changing war: it is changing everything, including the animals that we are. Biological evolution gave us brains so powerful that we could invent cultural evolution, but cultural evolution has now reached the point that the machines we are building are beginning to feed back into our biological evolution—with results that will change the game of death into an endgame of death, with the potential to make violence irrelevant.

  It is hard to imagine anything that could be more important for the future of war, but in conversations over the last year or two I have noticed a deep disconnect between how technologists and security analysts see the world. Among technologists, there seems to be no such thing as over-optimism; everything is possible, and it will all turn out better than we expect. In the world of international security, however, the bad is always about to get worse, and things are always scarier than we had realized. Security analysts tend to dismiss technologists as dreamers, so lost in utopian fantasies that they cannot see that strategic realities will always override technobabble, and technologists often deride the security crowd as dinosaurs, so stuck in the old paradigm that they cannot see that computerization will sweep their worries away.

  There are exceptions, of course. The National Intelligence Council’s reports try to bring both points of view together, as does the recent book The New Digital Age, co-authored by the technologist Eric Schmidt and the security expert Jared Cohen. Trying to build on their examples—schizophrenic as the experience can be—I devote the rest of this section to the technologists’ projections, turning to the reality check of security concerns in the section that follows. The combination produces a vision of the near future that is both uplifting and alarming.

  The technologists’ starting point is an obvious fact: computers powerful enough to fly fighter jets in real time will be powerful enough to do a lot more too. Just how much more, no one can say for sure, but hundreds of futurists have made their best guesses anyway. Not surprisingly, no two agree on very much, and if there is anything we can be certain of, it is that these visions are at least as full of errors as the century-old science fiction of Jules Verne and H. G. Wells. But by the same token, when taken in bulk rather than tested one speculation at a time, today’s futurists also resemble those of late-Victorian times in recognizing a set of broad trends transforming the world—and when it came to broad trends, Verne and Wells were arguably right more often than they were wrong.

  The biggest area of agreement among contemporary futurists (and the mainstay of the Matrix movies) is that we are merging with our machines. This is an easy prediction to make, given that we have been doing it since the first cardiac pacemaker was fitted in 1958 (or, in a milder sense, since the first false teeth and wooden legs). The twenty-first-century version, however, is much grander. Not only are we merging with our machines; through our machines, we are also merging with each other.

  The idea behind this argument is very simple. Inside your brain, that 2.7 pounds of magic that I said so much about in Chapter 6, 10,000 trillion electrical signals flash back and forth every second between some twenty-two billion neurons. These signals make you who you are, with your unique way of thinking and the roughly ten trillion stored pieces of information that constitute your memory. No machine yet comes close to matching this miracle of nature—although the machines are gaining fast.

  For half a century, the power, speed, and cost-effectiveness of computers have been doubling every year or so. In 1965, a dollar’s worth of computing on a new, superefficient IBM 1130 bought one one-thousandth of a calculation per second. By 2010, the same dollar bought more than ten billion calculations per second, and by the time this book appears in 2014, the relentless doubling will have boosted that above a hundred billion. Cheap laptops can do more calculations, and faster, than the giant mainframes of fifty years ago. We can even make computers just a few molecules across, so small that they can be inserted into our veins to reprogram cells to fight cancer. Just a century ago, it would all have seemed like sorcery.
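
  The doubling claim can be checked against the figures just quoted. The short Python sketch below back-calculates the doubling time implied by the 1965, 2010, and 2014 numbers; the years and values come from the paragraph above, and the rounding to powers of ten is mine.

```python
import math

# Calculations per second bought by one dollar, as quoted in the text
# (rounded to powers of ten).
calcs_per_dollar = {
    1965: 1e-3,   # IBM 1130: one one-thousandth of a calculation per second
    2010: 1e10,   # more than ten billion
    2014: 1e11,   # above a hundred billion
}

def implied_doubling_time(y0, y1):
    """Years per doubling implied by the growth between two data points."""
    doublings = math.log2(calcs_per_dollar[y1] / calcs_per_dollar[y0])
    return (y1 - y0) / doublings

print(f"1965-2010: one doubling every {implied_doubling_time(1965, 2010):.2f} years")
print(f"2010-2014: one doubling every {implied_doubling_time(2010, 2014):.2f} years")
# Both come out close to a year, matching "doubling every year or so."
```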

  We only need to extend the trend line out as far as 2029, observes Ray Kurzweil (the best known of the technological futurists, and now director of engineering at Google too), to get scanners powerful enough to map brains neuron by neuron and computers powerful enough to run the programs in real time. At that point, Kurzweil claims, there will effectively be two of you: one the old, unimproved, biological version, decaying over time, and the other a new, unchanging, machine-based alternative. Better still, says Kurzweil, the machine-based minds will be able to share information as easily as we now swap files between computers, and by 2045, if the trends hold, there will be supercomputers powerful enough to host scans of all eight billion minds in the world. Carbon- and silicon-based intelligence will come together in a single global consciousness, with thinking power dwarfing anything the world has ever seen. Kurzweil calls this moment the Singularity—“a future period during which the pace of technological change will be so rapid, its impact so deep … that technology appears to be expanding at infinite speed.”
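
  A rough sanity check on the shape of that trend-line argument, under loudly stated assumptions: if a dollar bought about a hundred billion calculations per second in 2014 (the figure quoted above), and if running a brain in real time needs something on the order of ten quadrillion calculations per second (a ballpark Kurzweil himself has used, assumed here rather than established), then continuing the roughly annual doubling puts the crossover in the late 2020s or early 2030s, in the neighborhood of Kurzweil’s 2029.

```python
import math

# Not a prediction, just a check on the arithmetic of the argument. The
# brain-scale figure is an assumed order of magnitude, not a number from
# the text; the 2014 starting point is quoted above.
calcs_per_dollar_2014 = 1e11       # about a hundred billion per second per dollar
brain_scale_calcs_per_sec = 1e16   # assumed requirement for real-time emulation
doubling_time_years = 1.0          # "doubling every year or so"

doublings_needed = math.log2(brain_scale_calcs_per_sec / calcs_per_dollar_2014)
crossover_year = 2014 + doublings_needed * doubling_time_years
print(f"Doublings needed: {doublings_needed:.1f}")                     # about 16.6
print(f"Trend line crosses brain scale around {crossover_year:.0f}")   # circa 2031
```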

  These are extraordinary claims. Naturally, there are plenty of naysayers, including some leading scientists as well as rival futurists. They are often blunt; the Singularity is just “the Rapture for Nerds,” says the science-fiction author Ken MacLeod, while the influential technology critic Evgeny Morozov thinks that all this “digito-futuristic nonsense” is nothing more than a “Cyber-Whig theory of history.” (I am not entirely sure what that means, but it is clearly not a compliment.) One neuroscientist, speaking at a conference in 2012, was even more direct. “It’s crap,” he said.

  Other critics, however, prefer to follow the lead of the famous physicist Niels Bohr, who once told a colleague, “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.” Perhaps, some think, Kurzweil is not being crazy enough. A 2012 survey of crystal-ball gazers found that the median date at which they anticipated a technological Singularity was 2040, five years ahead of Kurzweil’s projection; while Henry Markram, the neuroscientist who directs the Human Brain Project, even expects to get there (with the aid of a billion-euro grant from the European Union) by 2020.

  But when we turn from soothsaying to what is actually happening in laboratories, we discover—perhaps unsurprisingly—that while no one can predict the detailed results, the broad trend does keep moving toward the computerization of everything. I touched on some of this science in my book Why the West Rules—for Now, so here I can be brief, but I do want to note a couple of remarkable advances in what neuroscientists call brain-to-brain interfacing (in plain English, telepathy over the Internet) made since that book appeared in 2010.

  The first requirement for merging minds through machines is machines that can read the electrical signals inside our skulls, and in 2011 neuroscientists at the University of California, Berkeley, took a big step in this direction. After measuring the blood flow through volunteers’ visual cortices as they watched film clips, they used computer algorithms to convert the data back into images. The results were crude, grainy, and rather confusing, but Jack Gallant, the neuroscientist leading the project, is surely right to say, “We are opening a window into the movies in our minds.”

  Just a few months later, another Berkeley team recorded the electrical activity in subjects’ brains as they listened to human speech, and then had computers translate these signals back into words. Both experiments were clumsy; the first required volunteers to lie still for hours, strapped into functional magnetic resonance imaging scanners, while the second could only be done on patients undergoing brain surgery, who had had big slices of their skulls removed and electrodes placed directly inside. “There’s a long way to go before you get to proper mind-reading,” Jan Schnupp, a professor of neuroscience at Oxford University, concluded in his assessment of the research, but, he added, “it’s a question of when rather than if … It is conceivable that in the next ten years this could happen.”

 
