Military Misfortunes


by Eliot A Cohen


  Not only was the high command confronted by a novel environment; it was also imprisoned in a system that made it well-nigh impossible to meet the challenges of trench warfare. The submissive obedience of Haig’s subordinates, which Forester took for blinkered ignorance and whole-hearted support, was in reality the unavoidable consequence of the way in which the army high command functioned as an organization under its commander in chief. A personalized promotion system, built on the bedrock of favoritism and personal rivalry that had characterized the pre-1914 army, ensured that middle-ranking officers undertook offensives of no tactical or strategical use whether they believed in them or not: If they obeyed orders, they could hope for promotion, but if they did anything else they faced the certainty of removal and disgrace. The way Haig ran his headquarters, preserving an Olympian detachment, tolerating no criticism, and accepting precious little advice, reinforced the rigidity of the system.24 That system was itself a product of a different age and a different army, and was no longer appropriate to the circumstances. But rather than change it, Haig and his fellow commanders preferred to rely on the traditional tools of the general—men and guns—in ever-larger quantities. In Tim Travers’s words:

  The British army’s reaction to the emergence of modern warfare was therefore a conservative reflex, perhaps because full accommodation to machine warfare would have required social and hierarchical changes with unforeseen consequences.25

  Our excursion into the controversy over the talents—or otherwise—of the First World War generals shows very clearly that the most promising line of inquiry into the roots of military misfortune is not to issue imprecise blanket condemnations of the supposed deficiencies of the “military mind,” but to look more closely at the organizational systems within which such minds have to operate. Leveling the same charge at quite different men, operating in differing circumstances, and at different times, is not an aid to analysis but a barrier against it. Indeed, the essential uselessness of the concept of the “military mind” for our purposes is evident in the fact that it is rarely offered as the reason behind the setbacks of World War II, whereas the opposite is the case with the war of 1914–18. If any further proof were needed that this is not the answer, it surely lies in the fact that generals such as Pétain, Foch, and even Haig performed much better in 1917–18 than they did in the earlier years of the war. They could not have done so had they been encumbered with permanent mental blinkers.

  Institutional Failure

  Sometimes, in cases in which it is self-evidently absurd to pin the blame for military failure on a single individual, entire institutions have been held responsible. Thus the United States Navy as a whole has been blamed for its failure to adopt the practice of convoying in 1942, and the whole French army indicted for the collapse of France in 1940. At first glance, this looks like another cry of analytical despair: If no one can be blamed, everyone must be at fault. But nevertheless something useful can be gleaned from this approach, for it seeks to explain failure in collective rather than in individual terms. However, the difficulty of attempting to explain military misfortune by putting an entire institution in the dock can be clearly illustrated by looking briefly at the case of the French army at the outbreak of the First World War.

  Operating according to a tactical doctrine developed by General Ferdinand Foch, who believed that morale was stronger than firepower, the French army went to war in 1914 believing that charges by massed ranks of infantry with artillery support could overwhelm the defensive power of magazine rifles and machine guns. The result was a disaster: the French suffered some five hundred thousand casualties during August 1914 in the process of discovering that their cherished tactical doctrine was fatally flawed.

  What is the explanation for this failure, in which many parties were involved? One historian has claimed that the fatal disregard of firepower was the expression of “a long tradition of French intellectual arrogance.”26 Another has discovered a collective lack of brains, arguing that the intellectual quality of the whole French officer corps was in decline from the close of the nineteenth century, thereby rendering it liable to faulty decision making.27 More recently a third has suggested that the French army adopted the offensive with such enthusiasm because it conformed to organizational ideology and institutional aims. Belief in the offensive protected the standing regular army and created suspicion and doubt about the capabilities of reserve forces composed of civilians who had undergone only brief periods of military instruction and were held to be incapable of the disciplined tactical maneuvers necessary to carry out successful attacks.28

  All these arguments collapse when a comparative dimension is added to the inquiry. In 1914 all major armies believed more or less equally in the efficacy—and necessity—of the tactical offensive, regardless of whether they were composed largely of conscripts, like the Russian army, or entirely of regulars, like the British; and of whether they believed in the strategic offensive, like the Germans; or the strategic defensive, like the Italians.29 This being so, we cannot really accuse the French of being more arrogant or more stupid than anyone else.

  Paradoxically, the idea of institutional failure draws some strength from its counterpart, the study of institutional success. Here, the imagination of historians has been transfixed by the competence of the German army. Ignoring the unfortunate outcome of both World Wars (from the German point of view), many writers have probed for the secret of German success on the field of battle. Some have found the answer in the higher direction of the German army, and in particular in the excellence of the German general staff system.30 Others believe that it lies in less-elevated strata, and that a greater spirit of professional dedication among junior officers, more rigorous and effective training, a closer attention to tactical doctrine, a high degree of institutional integration, and a willingness to subject both successes and failures to close critical examination have been the real sources of German fighting power.31 There are many possible explanations for the tactical and operational virtuosity of the Germans in the first half of the twentieth century, but (perhaps for that very reason) no consensus exists as to which ones were the most significant.

  This kind of analysis is more fruitful than the traditional pastime of garlanding heroes and castigating villains. Nevertheless, looking at military forces as institutions is not entirely helpful in explaining why some of them sometimes fail because it directs us toward their distinctive social characteristics and seeks to identify special features or traits that make one army—or navy, or air force—different from another. Even supposing that we can identify these qualities correctly (which is difficult), and that they are truly unique (which is often not the case), this does not mean that we have found the cause of misfortune. For though we may now know what the institution is, we do not know how it works. To do this, we must think of armed forces not as institutions but as organizations.

  Cultural Failure

  One more form of blanket condemnation remains to be considered: putting a whole nation, rather than its army, navy, or air force, in the dock. This kind of national character assassination has been justified on the grounds that “certain qualities of intellect and character occur more frequently and are more frequently valued in one nation than in another.”32 Finding a firm analytical basis for such prejudices is well-nigh impossible.33 For one thing, no one has yet succeeded in setting out supposed national characteristics in anything like a sophisticated and acceptable form. For another, this kind of activity ignores the fact that the laurels of success are not selectively conferred by some celestial divinity on a favored people or peoples. Disaster is not the inescapable fate of any nation. If anything can be said about the importance of cultural stereotypes, so far it does not amount to much more than the injunction not to underrate your enemy.34

  LESSONS FROM CIVILIAN LIFE

  Disaster Theory

  Failure is by no means a monopoly of the military. Since the Second World War, a good deal of effort and energy has been expended in analyzing civil disasters; and these related studies have much to tell us about how to tackle our problem.

  Disaster study really began in the years following 1945. It was heavily funded by the American federal government, in the hope that it would be able to come up with ways to minimize the effects of a nuclear strike on the civil population. Teams of sociologists went to work to generate the necessary data, and their field studies generally consisted of a detailed examination and analysis of what was involved in the task of clearing up after disaster had struck.35 This concentration on practical matters contrasted with a general dearth of broad covering explanations about why disasters happen in the first place. The early literature virtually ignored this problem, and a rare foray into the question of causation concluded that it was impossible to construct a single theory of disaster that could encompass all the many sociopoliticopsychological variables involved.36 However, an important step forward occurred when theories of cognition began to be incorporated into disaster study—although contemporary analysts did not realize their true significance. Armed with the blessings of hindsight, it is fairly easy to see in civil disasters chains of cause and effect (invisible to contemporaries) whose interruption would have prevented disaster occurring at all. Thus, “disaster-provoking events tend to accumulate because they have been overlooked or misinterpreted as a result of false assumptions, poor communications, cultural lag and misplaced optimism.”37 As we shall see, a direct parallel exists in the military world, where failure can arise from inadequate or imperfect anticipation of an enemy’s actions.

  The lack of any general theory of disaster has not meant that accidents and catastrophes have remained puzzlingly inexplicable, for an easily identifiable culprit is always around to take the blame: human error. It would be foolish to deny that human error is clearly a major ingredient in the making of many disasters. Only human beings make decisions, and wrong decisions can easily result in misfortune. But the more one unravels the causes of disaster, the less satisfactory an explanation human error turns out to be—at least in the simple and straightforward sense of pinning the blame on someone.

  An example of where the hunt for human error can lead was the debate over the cause of the racetrack accident at Le Mans in June 1955, in which seventy-seven people died. The disaster occurred when one driver sheered into the crowd after being overtaken by another. At first the drivers were blamed. Then the mechanical features of the fatal car were the subject of critical scrutiny. From there the debate widened to take in the characteristics of the different nationalities whose representatives were most directly involved in the crash, the construction of the track, the organization of that particular race, the rules governing all motor racing, and finally the rationale of racing in general.38 By concentrating on human error, this inquiry into the causes of a particular disaster lost itself in an inescapable maze of unanswerable questions.

  What the Le Mans inquiry reveals—although it may have gone too far down this path—is that “operator error” may not be the sole or even the prime cause of disaster. To illustrate this, we can cite the Chernobyl incident, which took place on April 26, 1986. Although the plant’s operators had repeatedly violated established safety procedures, at least two other factors contributed to the genesis of disaster. Soviet reactor design depended to a large extent on written instructions to the operators to ensure that the reactor remained in a safe condition, rather than on built-in engineering safeguards; and it was readily possible to carry out tests without proper authorization and supervision.39 Clearly errors had been made at a number of levels. But things can sometimes go badly wrong in circumstances where there has been no obvious operator error.

  At 4:00 A.M. on March 28, 1979, a serious malfunction in the non-nuclear part of the Three Mile Island nuclear reactor triggered a series of automated responses in the cooling system. During this process, a relief valve on the top of a pressurizer became jammed open. For over two and a quarter hours—abetted by an inadequate warning system that failed to register that the valve was stuck open and instead signaled that a switch had been thrown that ought to have closed it—operators misread the symptoms, turning off an automatic cooling system and thereby allowing the reactor core to become partially uncovered. Another twelve hours passed before plant crew and service engineers agreed on an effective course of action to overcome the results of these errors. Meltdown was avoided when an operator joining the emergency team correctly deduced that the pressurizer relief valve had jammed open. At that moment disaster was only sixty minutes away.

  The independent investigatory commission that examined the Three Mile Island incident found many areas of human error that had contributed to creating a “disaster environment”:

  Licensing procedures were not entirely adequate, giving rise to some deficiencies in plant designs. Operator training was totally inadequate for emergencies, and poorly monitored. Control rooms were often designed with precious little attention to the operator’s needs. The lessons learned from malfunctions and mistakes at nuclear plants both here and abroad were never effectively shared within the industry.40

  Once the incident had begun, design failure and psychological predisposition combined to make things worse: Operators elected to believe the misleading indicator light but disbelieved a series of ominous readings from other instruments that indicated that something was going badly wrong.

  From this brief account it is obvious that Three Mile Island was anything but a straightforward case of operator error. Rather, it was a complex accident in which a number of factors combined in unforeseen and unexpected ways. The Nuclear Regulatory Commission’s report acknowledged this complexity. “While there is no question that operators erred . . . ,” it concluded, “we believe there were a number of important factors not within the operators’ control that contributed to this human failure. These include inadequate training, poor operator procedures, a lack of diagnostic skill on the part of the entire site management group, misleading instrumentation, plant deficiencies, and poor control room design.”41

  These examples are important to our inquiry in two respects. First, they confirm the wisdom of our earlier rejection of the “man in the dock” theory as an explanation of military misfortune. Second, they emphasize the fact that—in peace and in war—men operate in environments in which events are only partly the result of controlled decisions taken by the person “in charge.” To go beyond this, and penetrate further into the complex world of misfortune and disaster, we must turn elsewhere for guidance.

  Failure in Business

  Like their military counterparts, businessmen, professors of business administration, and their students do not appear to enjoy discussing failure.42 Like their counterparts, business journalists relish the opportunity to tell the tale of how greed and ineptitude lead businesses to produce products that do not sell, to undertake projects that cannot work, to pile up debts that cannot be repaid. A few authors, however, have produced a literature on business failure that repays some attention.43

  Most business failures take the form of the collapse of small, young companies—the equivalent of the explainable military failures we discussed in the previous chapter. There are other cases, however, which closely approximate military misfortunes: a spectacular failure of a large and competent organization in a major undertaking—the business equivalent of a military campaign. One particularly good example of this is the story of the Edsel, the car introduced by Ford with much fanfare in 1957, which failed miserably and was withdrawn from the market within two years.44 As usually told, the story of the Edsel is that of an organization that deceived itself through the use of pseudoscientific public opinion surveys, making a mockery of the techniques of market research and itself in the process.

  The case of the Edsel, however, becomes far more puzzling (and interesting) when placed in the larger context of the Ford Corporation’s performance since World War II. After its initial glorious period under its founder, Henry Ford, whose Model T became synonymous with the popularly owned automobile, Ford underwent a long period of decline, during which it was outstripped by its major American competitors. It was only after World War II that its recovery began—a recovery under way at the time of the Edsel fiasco. Indeed, after the bruising experience of the Edsel (which some estimate cost Ford as much as $350 million, although that figure is probably too high) Ford produced one of its most successful cars ever, the Thunderbird.

  The more careful studies of the Edsel failure reveal that its sources lay in the confluence of several different kinds of factors. One set of problems involved particular tactical choices made by different managers: a confused pricing policy, a publicity campaign that created excessive expectations, and a design that was not terribly alluring. Another set of problems stemmed from erratic quality control—a deeper-seated problem in Ford cars that plagued the company for some years; many of the first Edsels had defects (most of them minor) that contrasted sharply with the image created by the public relations men. Tactics was not the problem here; organization of production was. Another organizational problem was the company’s decision to create a completely new division to handle the Edsel, a division immediately thrust into competition with the other Ford divisions for resources and outlets, as well as with the divisions of other American car manufacturers.

 
