Building the Great Society

by Joshua Zeitz


  The growth of the federal state during the New Deal and World War II pushed the boundaries of how actively the federal government could manage the nation’s economy by pulling the fiscal and monetary levers available to the executive branch. Many postwar liberals came to believe as an article of faith that, through careful application of Keynesian economics, expert bodies like the Council of Economic Advisers—created in 1946—could calibrate government spending to ensure sustained growth. Unlike the Roosevelt administration, with its halting embrace of countercyclical spending in 1935 and 1936—followed by a return to budget consciousness in 1937, which produced the “Roosevelt Recession”—postwar Keynesians did not view deficits merely as temporary instruments for fighting economic down cycles. Instead, they believed that such measures should be employed even in relatively flush times to drive the economy to full capacity. In this belief, they rejected as arbitrary and unscientific the idea that balancing the budget in a given calendar year should outweigh the more farsighted work of creating full employment and output. Once the economy operated at peak capacity, tax revenues would erase whatever temporary deficits had been incurred to secure growth.

  Liberals needed little convincing that, through expert management of the economy, they could achieve full employment and low inflation. Signs of prosperity seemed abundant everywhere. Richard Hofstadter, an influential historian and public intellectual, spoke for many on the center left when he observed that “a large part of the New Deal public, the jobless, distracted, and bewildered men of 1933, have in the course of the years found substantial places in society for themselves, have become homeowners, suburbanites, and solid citizens.” Over 7.8 million Americans availed themselves of the G.I. Bill of Rights to attend college and vocational schools, and millions more benefited from G.I. housing loans that propelled them into suburban splendor. It was a decade when bulldozers and cement mixers swiftly transformed vast reaches of American farms and forests into so many streets and cul-de-sacs, each lined by rows of brand-new white Cape Cod homes, and when increasing numbers of middle-class and working-class employees enjoyed previously unimaginable benefits like annual cost-of-living adjustments to their wages and salaries, employer-based health insurance, paid vacations, and private pensions. Capitalism, which in recent memory seemed to have run its full course, was now functioning with great efficiency. This realization, in turn, led many Democrats to rethink some of their long-standing ideas about policy and politics.

  During the Depression, many liberals had advanced radical reforms to break up banking and industrial interests and reallocate resources; now they envisioned a consensus culture that was essentially classless in nature. Unbounded growth extended the promise of unlimited prosperity. If in the 1930s many liberals cheered mass strikes and demonstrations as necessary instruments to spur a redistribution of wealth, by the 1950s they regarded mass politics warily, particularly when it took the form of radical anticommunism, which seemed eerily parallel to European fascism of very recent memory. The sociologists David Riesman and Nathan Glazer captured this paradox neatly when they observed that many intellectuals felt a closer kinship with Wall Street, which was more tolerant of “civil rights and civil liberties”—two postwar liberal priorities—than with their “former allies”—“the farmers and lower classes of the city.”

  Firm in their belief that America had solved the twin problems of poverty and economic instability, liberal writers in the 1950s muted their criticism of capitalism and, in an ironic twist, directed much of their scorn at how members of the once-impoverished working class—now comfortably ensconced in middle-class suburbs, leading happy middle-class lives—chose to spend their money. They deplored the “techno-burbs,” with their “haggard men” all catching the same train each morning, each wearing the same gray flannel suit as the other, coming home to his “tense and anxious” wife and “gimme kids.” Prosperity, wrote the critic Dwight Macdonald, had overwhelmed high culture and reduced the United States to an ugly amalgam of “Masscult and Midcult”: the former embodied by the intellectually flimsy output of Disneyland, dime-store fiction, and commercial television; the latter, by faux intellectualism of the sort on display in the pages of the Saturday Evening Post or Book-of-the-Month Club selections (offensive because it “pretends to respect the standards of High Culture while in fact it waters them down and vulgarizes them”). In a representative demonstration of liberal derision, a writer for the Nation sneered that the typical Middle American “buys the right car, keeps his lawn like his neighbor’s, eats crunchy breakfast cereal, and votes Republican.” Other critics fretted that affluence and suburban banality had stripped people of their personal agency, thus generating “alienation,” a mass “identity crisis,” an “age of anxiety,” a “lonely crowd” composed of formerly autonomous, “inner-directed” individuals who had been reduced to “other-directed” conformists. Whereas liberals in the age of Franklin Roosevelt celebrated the ordinary worker and endeavored to secure for him or her a modicum of economic security, twenty years later they tended to worry more about the spiritual and cultural impoverishment that accompanied affluence.

  This shift in outlook strongly influenced the development of the liberal agenda in the lead-up to the 1960s. The economist John Kenneth Galbraith represented the emerging liberal consensus when he argued that America should no longer direct resources to an ever-greater output of consumer goods to drive growth. In his award-winning and influential book, The Affluent Society, he argued that the economy had grown sufficiently that its surplus should be redirected away from crass commercial trappings to investments in education, infrastructure, and research and development. The political journalist Richard Rovere echoed this thinking when he observed that the country gained little from the “production, distribution, and consumption of trashy things, the creation of trashy houses and landscapes, the dissemination of trashy education and ideas.”

  In the 1950s, liberal intellectuals and policy makers pressed successfully for the expansion of Social Security, which until mid-decade excluded people who worked in domestic service, hotels and laundries, agriculture, or local government service. They also championed legacy components of Harry Truman’s Fair Deal platform, including federal aid to education, public health insurance, and civil rights protections for African Americans. But many now also took up Galbraith’s call to arms. Frank Church, a Democratic senator from Idaho, worried that America might soon become “a kind of modern Babylon of private plenty in the midst of public poverty.” The country, lamented his colleague Mike Mansfield, was like a massive ocean liner “frozen in a dangerous course and with a hull in pressing need of repair.” In effect, the United States was squandering its riches on frivolous things, at the expense of the public interest. Going forward, American liberalism would concern itself with the spiritual as well as the material well-being of its citizens.

  It is little wonder that the image of the supermarket—that most iconic symbol of postwar plenty—haunted the liberal mind. “With the supermarket as our temple, and the singing commercial as our litany, are we likely to fire the world with an irresistible vision of America’s exalted purposes and inspiring way of life?” Adlai Stevenson lamented in the pages of Time magazine in 1960. (Ironically, the table of contents that teased his article was sandwiched between advertisements for shiny new refrigerators stocked with a surfeit of food.) Writing just months later for Esquire, Norman Mailer conceded that “not all the roots of American life are uprooted, but almost all, and the spirit of the supermarket, that homogeneous extension of stainless surfaces and psychoanalyzed people, packaged commodities and ranch homes, interchangeable, geographically unrecognizable, that essence of a new postwar SuperAmerica,” represented the hollowness of contemporary life in the United States. Mailer’s essay—“Superman Comes to the Supermart”—tapped into a popular metaphor, for as Life magazine observed more favorably, the spirit of America was best captured by an image of suburban shopping carts, “cornucopias filled with abundance that no other country in the world has ever known.”

  In his poem “Superman,” John Updike embraced abundance for what it was—all present and all consuming. His narrator, the archetypal middle-class American, drove each day to the “supermarket,” parked in a “superlot,” and bought “Super Suds.” Most self-aware contemporaries would readily have appreciated the wry social commentary.

  The liberal worldview assumed that a rising economic tide could and should lift all boats. Indeed, most critics and reformers agreed that America was an economic powerhouse; if only it were possessed of a more public-minded spirit, it could accomplish great things. Armed with this faith that the economy would grow in perpetuity under expert guidance—yet predisposed to believe that Americans were channeling their energy and resources in the wrong direction—liberal intellectuals and policy experts in the early 1960s could plausibly conclude that they possessed the tools to fight and win an “unconditional war on poverty.” All they needed was to be shown that it still existed.

  • • • • •

  “People are poverty stricken when their income, even if adequate for survival, falls radically behind that of the community,” Galbraith wrote in his influential book. This idea—that in an affluent society, most poor people had enough to get by and were therefore “poor” only by comparison to their more fortunate neighbors—interested the Democratic senator Paul Douglas of Illinois, a former economics professor who suspected that Galbraith understated the magnitude of the problem. Despite the fundamental strength of the American economy, he worried that many people still lived below an objective poverty line. As co-chairman of the Joint Economic Committee, Douglas hired Robert Lampman, a young economist at the University of Wisconsin, to dive deeper into the subject. The result, a report titled “The Low Income Population and Economic Growth,” found that poverty remained a persistent challenge and that wealth and income distribution had grown more unequal in recent years. The study resonated with policy-minded liberals. In 1961, Walter Heller, the chairman of the Council of Economic Advisers, invited the author—whom he knew casually from academic circles—to join the council’s staff. For two years, Lampman worked primarily on other issues, but in early 1963 he and Heller saw an opening. The year before, the writer and socialist organizer Michael Harrington had published an arresting volume on American poverty. Titled The Other America, it argued that upwards of fifty million people—over a quarter of the population—lived in a “system designed to be impervious to hope.” The “other America” was “populated by the failures, by those driven from the land and bewildered by the city, by old people suddenly confronted with the torments of loneliness and poverty, and by minorities facing a wall of prejudice.” Largely “invisible” to members of the prosperous middle class, other Americans were trapped in a national “ghetto, a modern poor farm for the rejects of society and of the economy.”

  Very few politicians or their staff members actually read Harrington’s book, but many, including JFK, absorbed Dwight Macdonald’s long and engaging review, which appeared in the New Yorker in January 1963. It shocked the liberal conscience to learn that, even by the government’s tight definition, thirty-four million Americans—more than one out of six—lived beneath the poverty line and that three-quarters of these individuals were children and senior citizens. Many influential policy makers, including the president, also read a series of gut-wrenching articles in the New York Times by the veteran reporter Homer Bigart, who chronicled widespread destitution and social wreckage in West Virginia. Others were equally stirred by Night Comes to the Cumberlands, Harry Caudill’s stark volume on economic want in the Appalachians.

  Though critics would later identify the Great Society’s antipoverty programs as a handout to black residents of urban ghettos, in the early 1960s policy makers and journalists still tended to associate poverty with white families in areas of the Appalachians and Midwest that had been stripped clean of coal, or where automation had rendered human labor obsolete. Such families, whether in desolate hamlets in Kentucky or squalid white ghettos in Cincinnati or Detroit, were the public face of American poverty.

  Perceiving an opening, Heller secured JFK’s consent to develop the seeds of a comprehensive program. Beginning in May, he and Lampman convened brown-bag lunches each Saturday afternoon at the Executive Office Building, where a revolving-door cast from the CEA, the Budget Bureau, and the Departments of Health, Education, and Welfare (HEW), Agriculture, Labor, and Justice ruminated on the problem and how to eradicate it. Many of those present were former university professors, and their meetings often resembled graduate seminars. “Some people would say poverty obviously means lack of money income,” Lampman told an interviewer. “That had the great merit of being something we had some numbers on. . . . But other people said that’s really not what poverty means. . . . It’s a spiritual concept; or it’s a participation-in-government concept; or it’s a lack of some kind of self-esteem, sort of a psychological or image problem that people had. . . . Still others would say it really has to do with lack of opportunity. It has to do with lack of public facilities like schools and so on. That’s what makes people really poor.”

  While some participants, particularly those representing the Department of Labor, argued that what poor people needed most was income, which the government could provide in the form of public-sector relief jobs as it did during the Great Depression, the thrust of the conversation that summer identified such broad-based themes as a “culture of poverty” and lack of “opportunity” as the primary causes and drivers of poverty. Such thinking made sense in the context of 1963. The economy, after all, was robust, and most Americans enjoyed the material, if not the spiritual and cultural, benefits of prosperity. Those who had been left behind only needed to be equipped with the means to claim their fair share of an ever-growing pie. This outlook provided a ready answer to those citizens who sought reassurance that government would maintain “full employment at a time when automation is replacing men,” as Kennedy framed the issue. The year he was elected, Gallup found that more Americans feared being displaced by machines than feared the Soviet Union. Opportunity theory promised every worker a chance to retrain and retool in a fast-changing workforce.

  Soon after Lampman returned to Wisconsin in late summer, the brown-bag lunches evolved into an informal interagency task force. Heller and Kermit Gordon, the director of the Bureau of the Budget, solicited feedback from cabinet departments on three broad topics: how to prevent people from slipping into poverty, how to pull them out of it, and how to improve the lives of those living in its grip. The resulting input was “a lot of junk,” recalled one of Heller’s aides, “warmed-over revisions of proposals that had been around for a long time, coming up out of the bureaucracy, programs that had been already rejected by the Congress.” Only when Heller sat down with David Hackett and Richard Boone did he hear something truly imaginative.

  Hackett had been a close friend of Bobby Kennedy’s during their days at Milton Academy, an elite boarding school in Massachusetts. As an Irish Catholic at what was “basically an Anglo-Saxon, WASP school,” and as a relative latecomer who transferred in during his junior year, Kennedy was something of a “misfit,” according to Hackett. Hackett himself was the academy’s golden boy. Fair-haired and handsome, a stellar athlete, and armed with a natural charm, he was later the model for the character Phineas in the novelist John Knowles’s adolescent classic, A Separate Peace. Though in his post-school days Hackett led an undistinguished professional life, Bobby involved him in his brother’s 1960 presidential campaign and then hired him at the Justice Department, where he served as staff director for the President’s Committee on Juvenile Delinquency (PCJD). Widely disparaged as a lightweight by the Ivy-educated lawyers who formed the attorney general’s inner circle, Hackett worked hard to immerse himself in academic and field literature on juvenile crime. Though some Kennedy intimates joked that the former college hockey champion had been hit in the head with too many pucks, an economist at the CEA remembered him as “a very hard-driving, effective, caring person” who managed in a short space of time to master his adopted field.

  Hackett spoke with representatives of the Ford Foundation, which for several years had been funding an initiative it called the Gray Areas project—a series of demonstration trials in such cities as New Haven and New York, where local organizations worked to improve conditions and create opportunities in the urban ghetto. Through the Ford Foundation, Hackett met Lloyd Ohlin and Richard Cloward, professors at Columbia University’s School of Social Work who spearheaded Mobilization for Youth, a Gray Areas project centered on Manhattan’s Lower East Side. In 1960, Ohlin and Cloward co-authored an academic book, Delinquency and Opportunity, which argued that juvenile delinquency was not an expression of personal pathology but rather a natural and rational response to the absence of opportunity in distressed urban areas. “The good society,” Ohlin argued, “is one in which access to opportunities and the organization of facilities and resources are so designed as to maximize each individual’s chance to grow and achieve his greatest potential for constructive contribution to the cultural life of the social order.” He convinced Hackett that the key to reducing juvenile delinquency was to build local community organizations that operated from the ground up, identify gaps in social services and infrastructure in consultation with local residents, and provide tools that would create opportunity. Ohlin was working as a consultant to the PCJD in 1963 when he and Richard Boone, a veteran of the Ford Foundation who was then Hackett’s deputy, received a summons to brief Walter Heller on the committee’s “community action” projects. Their meeting was intended to last thirty minutes but instead ran over two hours.

 
