The Last Warrior


by Andrew F. Krepinevich


  It is hard to say whether Goldhamer and Marshall’s “The Deterrence and Strategy of Total War, 1959–1961” (RM-2301) assuaged concerns within the American national security community about the seeming contradiction between nuclear deterrence and nuclear war-fighting if deterrence failed. At the time, US deterrence of a nuclear war with the USSR was predicated on the threat of massive nuclear retaliation. But the threat had to be credible and, as Eisenhower recognized as early as 1953, carrying it out was tantamount to national suicide. From Marshall’s perspective, however, the enduring value of RM-2301 was that it represented the beginning of his search for better analytic methods for evaluating military competitions.

  In the early 1970s during the second Nixon administration, both Marshall and his close friend, James Schlesinger, would return to the dilemma between nuclear deterrence and nuclear war-fighting—Schlesinger as secretary of defense and Marshall as the director of net assessment. In January 1974 Nixon would approve National Security Decision Memorandum 242 (NSDM-242), which directed the development of a wide range of limited, selective, and regional nuclear options that could achieve early “war termination, on terms acceptable to the United States and its allies, at the lowest level of conflict feasible.”85 The aim was to give the president options other than a stark choice between surrendering to Soviet aggression and executing the Single Integrated Operational Plan, the American war plan for all-out nuclear war. Although Marshall, Schlesinger, and many others sought plausible options between these extremes from the mid-1970s on, the problem proved insoluble even for the very best strategists. At the time of RM-2301, though, this reality was far less obvious than it would later become. In the meantime Marshall and Goldhamer’s investigation of the tension between nuclear deterrence and war-fighting also foreshadowed the pessimistic conclusions Marshall would draw in the mid-1960s about the ability of analysts to estimate the relative military power between nations.

  * Cohen is generally credited with inventing the neutron bomb, a fission-fusion weapon designed to kill primarily with radiation rather than heat and blast.

  * Later, in October 1961, the Soviets successfully tested a hydrogen bomb dropped from a Tu-95 Bear bomber that yielded some 50 megatons. This weapon, RDS-220, also known as Tsar Bomba, was actually a 100-megaton design. Like RDS-37, RDS-220 was intentionally derated for the test on Novaya Zemlya Island. Even so, the blast caused damage as far as 1,000 kilometers away, breaking windows in Finland and Sweden.

  * The first operational US Air Force H-bomb was the MK-17. This weapon weighed over 41,000 pounds and was soon succeeded by the “lightweight” MK-15, which weighed less than 8,000 pounds.

  * COMINT remained a rich intelligence source through the late 1950s, but declined rapidly after the 1960 defection to the USSR of two National Security Agency cryptologists, Bernon Mitchell and William Hamilton Martin.

  * A difference between the US and Soviet acquisition systems was that in the USSR design and production were carried out by different organizations, whereas in the United States both were usually done by the same defense contractor.

  3

  THE QUEST FOR BETTER ANALYTIC METHODS, 1961–1969

  If you think you are in the business of giving answers, you will get the diagnosis wrong, because your people are going to have preconceptions about what the answer is.

  —ANDREW MARSHALL

  During the 1960s Marshall’s intellectual outlook matured on a number of issues. He became ever more convinced that RAND needed to look beyond systems analysis and undertake basic research on the behavior of large organizations such as the US Air Force and private-sector business firms. He also articulated the general problems of measuring relative military power between nations. These problems, which have not been fully solved to this day, have continued to fuel Marshall’s enduring interest in developing better analytic methods. And, most important, by the end of the decade he had developed a framework for thinking about the United States’ long-term peacetime military competition with the USSR that no longer focused, as most prior Cold War analyses had done, on the possible outcomes of an all-out nuclear exchange between the two nations.

  Marshall’s views on these matters were not always embraced by many of his RAND colleagues, especially those outside the economics department. By the late 1960s many of his fellow researchers had become increasingly caught up in debates over how best to wage the Vietnam War, but Marshall’s focus remained primarily on the challenges of helping the United States be a more effective competitor vis-à-vis the USSR in what had become a protracted rivalry. This, for him, was the overriding strategic challenge facing the United States. The 1960s were a decade in which he followed his own instincts and went his own way to develop the basic elements of what would become known as net assessment.

  Before turning to the main intellectual paths Marshall pursued during the 1960s, a few words are in order about RAND’s involvement in bringing program budgeting and systems analysis to the Pentagon. This analytic revolution was led by President John Kennedy’s defense secretary, Robert McNamara. McNamara had been enamored with the use of statistics and quantitative analysis to manage large organizations since earning a master’s degree from the Harvard Business School in 1939. He began teaching business administration there in 1940, and was soon drawn into the efforts of the Army Air Forces (AAF) to use statistical controls to manage the United States’ air arm during World War II.

  Within months of the Japanese attack on Pearl Harbor Robert Lovett, Henry Stimson’s assistant secretary of war for air, realized that “there was no centralized administrative control over statistical reporting and analysis in the Army’s air arm.”1 In March 1942 Lovett established the Directorate of Statistical Control within the Air Staff and put Charles “Tex” Thornton in charge. Thornton had impressed Lovett with his ability to draw essential information from a mass of statistics and to present it clearly, as demonstrated in a report on federal housing that he had written in the late 1930s while working at the Interior Department. Lovett gave Thornton carte blanche to build a statistical control organization. The underlying idea behind the directorate was that the war should be conducted as if it were a form of “big business” with a strict accounting of gain and loss.2 By the war’s end Thornton’s empire had grown to over fifteen thousand employees, including more than three thousand who served with AAF commanders in the field.

  At the outset in 1942, though, Thornton’s first task was to assemble a staff of officers skilled in quantitative methods. He promptly struck a deal with Wallace Dunham, dean of the Harvard Business School, to set up a course to do the training.3 The initial cadre of some one hundred “citizen-soldiers,” all of whom were personally recruited by Thornton for their experience in business, banking, and data processing, reported to Soldiers Field in Cambridge, Massachusetts, in June 1942.4 McNamara became one of the initial instructors in the AAF’s statistical school, but in 1943 he took unpaid leave from Harvard to go on active duty—and quickly found himself serving in Thornton’s directorate.5 Later McNamara also served in General LeMay’s bomber commands in China and the Marianas. There he established a statistical control unit for LeMay’s B-29 operations and played a supporting role in LeMay’s development of firebombing tactics against Japanese cities.6

  McNamara’s hands-on experience in the use of statistics and analysis to manage large organizations expanded after World War II when he was hired by the Ford Motor Company along with Thornton and other veterans of the Statistical Control Division. In late 1945 Thornton formed a management group with nine former officers from his wartime organization. With support from Lovett he quickly sold the entire group to Henry Ford’s grandson, Henry Ford II.7 At the time, the iconic Ford Motor Company was in dire need of financial management. After production of Ford’s highly successful Model T had ended in 1927, the company had been slow to bring out new and different models. By the beginning of World War II Ford’s market share had fallen to less than 20 percent and losses had offset all its profits from 1927 to 1941.8 Henry Ford’s grandson was understandably eager to make his mark by reinvigorating the company, and Thornton’s group promised to restore the company to its leadership position. To give a sense of the task Thornton faced, when he arrived at Ford he was shocked to find that the only financial data available on Ford’s operations consisted of the cash statement provided by the company’s bank.

  The former AAF officers who arrived at Ford in January 1946 were all very young and very bright compared to most of the company’s employees. Knowing nothing about the auto industry, the newcomers asked so many questions that the staff initially dubbed them the “Quiz Kids.”9 By 1959 the seven members of Thornton’s original group still at Ford, including McNamara, were “largely in control of the corporation.”10 The “Quiz Kids” had become the “Whiz Kids,” the sobriquet now an accolade for their success in bringing order out of the chaos that had existed at Ford before their arrival.11

  Ironically Thornton himself did not last long at Ford. In 1948 Henry Ford II fired him due to his clashes with Ford executive Lewis Crusoe.12 But the seven Whiz Kids who stayed the course achieved such success in reinvigorating Ford’s fortunes, using statistical controls—“management by the numbers,” as it would come to be known—that McNamara was named president of the Ford Motor Company on November 9, 1960, the day after John Kennedy won the presidency.

  Following the election, the president-elect first offered the job of defense secretary to Lovett. During World War II Lovett had distinguished himself as assistant secretary of war for air. After the war, he had served as deputy defense secretary under George Marshall from October 1950 until September 1951. Then, as secretary of defense himself, he headed the Pentagon until the end of the Truman administration in January 1953.

  Lovett declined Kennedy’s offer to return to the Pentagon for a second tour. Instead he suggested McNamara for the post. Kennedy offered McNamara his choice of two positions: either secretary of the treasury or secretary of defense. McNamara initially declined the offers, but finally agreed to think about them and meet later with the president-elect. By the end of McNamara’s second meeting with Kennedy, he was so impressed with the new commander in chief that he agreed to take the post at defense.13

  Kennedy was convinced that the Pentagon’s strategic planning was not appropriately reflected in its budget priorities. So when McNamara arrived at the Pentagon he had a clear charter to implement the changes necessary to bring this about.14 While he had the authority and responsibility to make sound decisions on the crucial issues of national security, such as defense strategy and service acquisition programs, McNamara lacked the management tools that would enable him to do so.15 To help him wrest control of the budget from the military services he hired Marshall’s mentor and friend, Charles Hitch, as the Pentagon’s comptroller.

  By the early 1960s Hitch was one of the nation’s leading authorities on program budgeting and the use of quantitative systems analysis to choose the most cost-effective weapon systems and force postures. His initial tasking from McNamara was to develop the statistical information and management systems McNamara needed to gain greater control over the military services and the Joint Chiefs of Staff. McNamara was further aided by other RAND staff members brought in to serve in the Office of the Secretary of Defense (OSD) to help bring about the desired reforms.

  RAND’s development of systems analysis had its origins in the successes of operations research (OR) during World War II. The British experimental physicist Patrick M. S. (“PMS”) Blackett is considered the father of OR, which draws on statistical methods to aid ongoing military operations. For example, in early 1943, during the Battle of the Atlantic in which German U-boats threatened to sever Britain’s logistic lifeline across the Atlantic, the use of OR tools and techniques by Blackett’s operational research team at the Admiralty led to the adoption of convoys larger than the British Admiralty’s limit of sixty merchantmen.16 The larger convoys, in turn, helped reduce shipping losses when the U-boats did intercept them.
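The arithmetic behind the convoy finding can be sketched with a toy model (the numbers and the fixed-losses-per-interception assumption below are illustrative, not taken from the book or from wartime data): if each intercepted convoy loses a roughly fixed number of ships regardless of its size, then moving the same number of merchantmen in fewer, larger convoys cuts total losses.

```python
import math

def expected_losses(total_ships: int, convoy_size: int,
                    losses_per_interception: float = 4.0) -> float:
    """Toy model of convoy arithmetic: each convoy that is
    intercepted loses roughly the same number of ships regardless
    of its size, so total losses scale with the number of convoys
    sailed. All figures are illustrative."""
    num_convoys = math.ceil(total_ships / convoy_size)
    return num_convoys * losses_per_interception

# Moving 600 merchantmen in convoys of 60 versus 120 ships:
losses_small = expected_losses(600, 60)   # 10 convoys sailed
losses_large = expected_losses(600, 120)  # 5 convoys sailed
```

In this sketch doubling convoy size halves expected losses; the historical effect was less clean, but the direction matched what Blackett’s analysts observed.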

  World War II OR used mathematical analysis to improve current operations. Efficient search patterns, for instance, were amenable to mathematical analysis. In the late 1940s and throughout the 1950s RAND’s staff built on wartime OR to help the Air Force make better, more cost-effective choices regarding its force posture, particularly with respect to the numbers and types of bombers it should field.17 Ed Paxson organized RAND’s first major analysis of a prospective air campaign against the Soviet Union and coined the term systems analysis to distinguish this broader kind of study from wartime operations research.18

  Once in the Pentagon the first order of business for McNamara and Hitch was to centralize control over the Pentagon’s budget. The existing system had no programming function to bridge the gap between service budgets and military planning, nor could it provide the information McNamara needed to link missions to costs.19 To remedy these problems Hitch and his staff created what became known as the Planning, Programming, and Budgeting System (PPBS), which introduced a programming function to relate plans to budgets. Given six months by McNamara to implement PPBS rather than the eighteen months Hitch initially proposed, the new comptroller and his staff managed to prepare the Fiscal Year 1963 defense budget using the new system and submit it to Congress in January 1962.20 Its limitations notwithstanding, PPBS has proved to be a major management innovation, and is used by the Defense Department to this day.

  To provide the underlying cost-effectiveness analyses PPBS required to inform major decisions on forces and weapons programs, McNamara and Hitch introduced systems analysis. Hitch asked Alain Enthoven, who had left RAND for the Pentagon in 1960, to set up an Office of Systems Analysis (OSA) within Hitch’s office. OSA’s purpose was to conduct cost-effectiveness studies aimed at quantifying alternative ways of accomplishing various national security objectives so that senior decision makers could understand which of them contributed the most to a given objective for the least cost; that is to say, those that were the most cost-effective.21 This innovation, too, has endured the test of time, although the office responsible for the function itself has undergone several name changes.22
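In spirit, an OSA-style comparison reduces to ranking alternatives by the effectiveness each delivers per dollar. A minimal sketch, with invented system names, costs, and effectiveness scores (none of them historical):

```python
# Hypothetical alternatives; the names, costs (in arbitrary budget
# units), and effectiveness scores are invented for illustration.
alternatives = [
    {"name": "System A", "cost": 120.0, "effectiveness": 300.0},
    {"name": "System B", "cost": 80.0,  "effectiveness": 240.0},
    {"name": "System C", "cost": 150.0, "effectiveness": 330.0},
]

def rank_by_cost_effectiveness(options):
    """Order options by effectiveness per unit cost, the basic
    ratio a cost-effectiveness study puts before decision makers."""
    return sorted(options,
                  key=lambda o: o["effectiveness"] / o["cost"],
                  reverse=True)

ranked = rank_by_cost_effectiveness(alternatives)
# "System B" ranks first at 3.0 effectiveness per cost unit.
```

The hard part of a real study was not this ratio but defending the effectiveness measure itself, which is where much of the controversy over systems analysis arose.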

  By 1965 Hitch concluded that the programming function in PPBS had been generally well received within the Pentagon. The reaction to OSA’s cost-effectiveness studies was another matter. Systems analysis was controversial then and remains so to this day. As an economist, Hitch found the resentment of the uniformed military over cost-effectiveness studies deeply puzzling. After all, as he and Roland McKean had noted in 1960, “Resources are always limited in comparison with our wants, always constraining our action. (If they did not, we could do everything and there would be no problem of choosing preferred courses of action.)”23 Thus there was, in Hitch’s mind, a clear need to determine how military tasks could be accomplished at the required level of effectiveness for the lowest possible cost.

  Hitch attributed the controversy over systems analysis studies to a belief among military officers that the analysts would favor the least costly weapons rather than those that offered the greatest effectiveness on the battlefield.24 He was wrong. The services’ problems with systems analysis ran much deeper. The fact of the matter was that McNamara, Enthoven, and other Pentagon Whiz Kids used systems analysis to justify choices that often went against the vested interests and professional judgments of senior military leaders. In effect systems analysis transferred decision-making on key investment choices from the military services to the Office of the Secretary of Defense.

  Marshall was not directly involved in the controversies inside the Pentagon over systems analysis during the 1960s. However, he was concerned about what he considered to be the excessive reliance on systems analysis, especially within RAND’s strategic studies program. While he was willing to employ quantitative methods himself to explore such issues as counter-value versus counterforce targeting, he could see the detrimental effects that overly narrow cost-effectiveness studies were having on RAND’s efforts to assess the US-Soviet competition in strategic (or intercontinental-range) nuclear forces.

  At the heart of Marshall’s misgivings about systems analysis was his growing awareness that decisions about nuclear forces, whether made in Washington or in Moscow, could be influenced by the vested interests of various decision makers and bureaucratic power centers. In the late 1950s he and Loftus began paying attention to the influence that Soviet organizations, from the Politburo down to design centers, had on the choices the USSR made about its nuclear forces. They had started Project SOVOY to encourage RAND analysts to begin taking these sorts of organizational considerations into account. By the early 1960s this line of thought led Marshall and a few others at RAND to begin advocating an explicit effort to develop analytic methods beyond systems analysis. One of Marshall’s strongest supporters for moving strategic analysis in this direction proved to be a young economics professor from the University of Virginia by the name of James Schlesinger.

  From RAND’s earliest days a recurring concern of the organization’s management had been recruiting top talent: the best people in the entire country to work on any given problem. One of the ways in which this was accomplished was by inviting promising individuals to spend a summer at RAND working with members of the think tank’s staff. In 1962 one of the summer invitees was Schlesinger, who had come to RAND’s attention as a result of his 1960 book The Political Economy of National Security.25 It included a chapter comparing Soviet economic growth to that of the United States, a subject that was of interest to Marshall.

 
