In the last decades of the century, social insurance was expanding its benefits while welfare was contracting. The system of social benevolence was tilted less toward bringing those born into poverty out of it, more toward sheltering the rest of us from insecurity.
CHAPTER 14
THE LIMITS OF BENEVOLENCE
1
In America the principle of compassion toward the poor is nonpartisan and nonsectarian, regularly proclaimed by preachers and politicians. The parable of the Benevolent Community informs endless rounds of exhortation and self-congratulation. Walter Mondale and fellow Democrats tried to seize the rhetorical high ground of righteousness in 1984, declaring “we’re fair, we’re decent, we’re kind, and we’re caring. We insist that, as we care for ourselves, there are some in America who need our help. There’s a limit to what Americans will permit to happen in this good country of ours.”1 Ronald Reagan matched him, sentiment for sentiment, and scored extra points on style: “How can we love our country and not love our countrymen; and loving them, reach out a hand when they fall, heal them when they’re sick, and provide opportunity to make them self-sufficient so they will be equal in fact and not just in theory?”2
No nation talks more about the importance of charity toward the less fortunate. No people organizes more concerts, bake sales, telethons, walkathons, and national hand-holdings to raise money for the hungry and homeless. None takes as seriously the problem of poverty or the ideal of equal opportunity. But few Western industrialized nations fail as miserably to bridge the gap between their richest and poorest citizens.3 The irony is not incidental. During the last quarter century a consistent majority of Americans has believed that the income gap between the nation’s rich and poor is too wide and should be narrowed.4 But an equally consistent majority has been deeply suspicious of the basic tenets of welfare.5
“Welfare” was a dirty word for a long time before Ronald Reagan entered the White House. Conservatives had long assailed the welfare system for corroding the work ethic and retarding capital accumulation. They were gradually joined by liberals disenchanted with a system they saw as stigmatizing the poor. Both sides worried that welfare induced permanent dependency. Jimmy Carter campaigned against the welfare system’s inefficiencies. State and federal welfare agencies symbolically transformed themselves into departments of “human services,” disdaining the dreaded word “welfare.” The stories that Americans began telling one another were of a welfare system run amok, draining the paychecks of working citizens, perverting the people it was meant to help, and ultimately harming the nation.
Compassion and generosity are still sentiments that Americans endorse and act on when it’s a matter of concerts, bake sales, and other such voluntary activities. But when it comes to government welfare programs, the consensus has dissolved. It is widely accepted that welfare does not work, but there is no alternative vision of public action that might. The Benevolent Community is bereft of any guiding philosophy for demarcating public and private responsibilities. As private individuals, we understand our obligations toward the poor; as citizens, we are frequently baffled, disappointed, and suspicious.
2
There has been confusion, first, about the definition of the community within which benevolence should take root. Franklin D. Roosevelt’s boldest innovation had been designating the nation as a community. At a time when the whole nation was stricken, and only a massive common campaign could hope to prevail over depression and fascism, this designation was compelling to the American people. But it was not until the mid-1960s, when Lyndon Johnson declared war on poverty, that Roosevelt’s notion of a national community became linked to welfare. The marriage was not easy and never gained widespread political support. Public generosity could readily be mustered for the needy living nearby, in the same town or even the same state. These poor could be seen and heard; their plight was palpable. But the poor of another state or region—many of them black, filling central cities thousands of miles away—had less of a purchase on the citizens’ sympathy. Generosity is a powerful sentiment, but its strength drops as distance and differentness from its object increase. And as guilt came to be invoked more prominently as a motive for aiding the poor, especially the black poor, welfare spending felt more like grim duty or even compulsion than like the embodiment of generosity. As a new wave of immigrants from Latin America and the Caribbean swarmed into the nation’s cities in the 1970s and early 1980s, the ideal of national community seemed an even feebler motive. To many Americans, there seemed no principled difference between poor Hispanics—some of them illegal aliens—living in Los Angeles and poor Hispanics living in Mexico City. What were the purposes and proper limits of benevolence? The idea of national community offered no guidance.
By the 1980s, accordingly, a not insubstantial portion of the American public was ready to hear a new story about the Benevolent Community, one that defined benevolence as voluntary charity and defined community as the local neighborhood rather than the nation. It was to be a “renaissance of the American community, a rebirth of neighborhood,” according to Reagan. The Republican platform for 1984 emphasized the new definition and encased it within a narrative that explained what had gone awry: “By centralizing responsibility for social programs in Washington, liberal experimenters destroyed the sense of community that sustains local institutions.” It was necessary to put responsibility back where it belonged, in the neighborhood, where natural bonds of friendship and shared aspirations would nourish and guide generosity. By the end of his first term, Reagan could point to several of his initiatives that were premised on this idea—the “New Federalism,” the block grant components of the 1981 tax and spending package, the private-sector initiatives task force, and, of course, his hostility to forced busing. Reagan declared that the new “emphasis on voluntarism, the mobilization of private groupings to deal with our social ills, is designed to foster … our sense of communal values.” Reagan explicitly repudiated Roosevelt’s vision: America was a nation of local communities, not a national community.6
This vision was emotionally appealing, mythically resonant, and profoundly out of sync with changing American reality. By the 1980s rather more Americans lived on military bases than lived in what could be called “neighborhoods” in the traditional sense (card games on the front porch, kids running over lawns and fields, corner soda fountains, town meetings, PTAs, and the friendly, familiar policemen and postmen). The majority lived in suburban subdivisions that extended helter-skelter in every direction, bordered by highways and punctuated by large shopping malls; or they lived in condominiums, townhouses, cooperative apartments, and retirement communities that promised privacy and safety in the better urban enclaves; or they inhabited dilapidated houses and apartments in the far less fashionable areas. Many worked at some distance from their homes and socialized with friends selected on some other basis than proximity. The people who happened to inhabit the geographic area immediately surrounding their homes had no special claim on their allegiances or affections. The average family, moreover, moved every five years or so.7 These ersatz neighborhoods contained no shared history, no pattern of long-term association.
Even if they had the time or inclination to get to know their neighbors, most Americans would meet people who shared the same standard of living as they did. If they were very poor, their neighborhood was likely to be populated by other very poor people; if very rich, by others who enjoyed the good things in life; if young and professional, then by others equally well-heeled and upwardly mobile. In sum, by the 1980s the meaning of neighborhood had changed. What had once been small towns or ethnic sections within larger cities had given way to economic enclaves whose members had little in common with one another but their average incomes.
The idea of neighborhood benevolence—of neighbors looking after one another—had little practical meaning in this new context. The sentiment remained attractive partly because of its nostalgic and romantic qualities, like a stroll down Main Street in Disneyland. But it was also attractive for a more insidious reason. The idea of community as neighborhood offered a way of enjoying the sentiment of benevolence without the burden of acting on it. Since responsibility ended at the borders of one’s neighborhood, and most Americans could rest assured that their neighbors were not in dire straits, the apparent requirements of charity could be exhausted at small cost. If the inhabitants of another neighborhood needed help, they should look to one another; let them solve their problems, and we’ll solve our own. The poor, meanwhile, clustered in their own, isolated neighborhoods. By the late 1980s many of America’s older cities—forced to take ever more financial responsibility for the health, education, and welfare of their poor inhabitants—were becoming small islands of destitution within larger seas of suburban well-being.8
3
There were other confusions concerning our collective obligation to the poor. Most Americans subscribed to the ideal of equal opportunity. But what did this mean? Surely it included the notion that no citizens should endure legally enforced discrimination based on their race, religion, or national origin, in the form of segregated schools and public transportation. Many would extend the proscription to private discrimination by employers and sellers. But it was something else again to enforce equality of opportunity by imposing what many Americans saw as penalties against themselves: forcing them to bus their children to distant and more dangerous parts of town, or giving jobs and promotions to minorities ahead of other worthy candidates.9
The revolt of blue-collar and middle-class Americans against these liberal policies is generally explained by simple self-interest; suddenly, they were bearing the burden of providing the poor with equal opportunity. But to attribute all of the resistance to selfish motives misses an important part of the tale. These policies were neither explained nor justified by reference to any broader principles that fit them into a philosophy of social obligation. There was no convincing story to explain why this burden should fall so heavily upon the shoulders of working-class Americans.
A third confusion concerned the objects of benevolence. Who was needy? Conservatives argued that the only people who should be eligible for public assistance were those unable to care for themselves. Persons capable of working who chose not to, or who worked but earned low incomes, did not deserve help. This argument, however, offered incomplete guidance. Even a mentally retarded teenage mother might be considered capable of living without public assistance if she shared a home with relatives, left her child with them or with friends each day, and traveled several hours to a menial job in a distant part of the city. Through most of human history, most people somehow managed to take care of themselves without public assistance, including many who lived with handicaps that most Americans would consider wholly debilitating. Thus the criterion of potential self-sufficiency remained thoroughly ambiguous. What should America expect of its poor and disadvantaged? What were the appropriate limits of public benevolence?
4
Before the mid-1960s America was inhabited by many poor people, but not by “the poor.” Those who suffered material deprivation were not sharply differentiated from the rest of us because poverty was relative, mostly a matter of degree. With one quarter of the work force unemployed during the Depression decade of the 1930s, poverty was demonstrably not a condition confined to, or caused by, some morally flawed subgroup. The subsequent wars, recessions, and demographic bulge of young postwar families with limited incomes confirmed America’s collective experience of struggling to make ends meet.
The nation began to perceive “the poor” as a separate group only once the majority of Americans achieved relative economic security. New stories began to be told: The public was shocked to discover entire cultures of poverty, as revealed, for example, in Michael Harrington’s tellingly titled book The Other America,10 in journalistic accounts of life in Appalachia, in dramatizations of the urban poor like Blackboard Jungle and West Side Story, and in the early rhetoric of antipoverty policy, which linked the movement for black civil rights to the plight of the poor.
In these new stories, the poor were different from the rest of us. They lived in exotic and mysterious environments; they sang their own songs and danced their own dances; their skin was often a different color, and they spoke in strange languages or with odd accents. Many of these groups, we discovered, had been poor for generations as the rest of us grew prosperous. The poverty of neglected minorities became a national scandal.
As with all such moral campaigns in America, the issue easily became a constitutional one. Many blacks were poor, and the two sets of deprivations, one based on deep-seated prejudice and the other on economic exclusion, were so entwined that welfare rights and civil rights came to be seen as much the same thing. This merger of newly asserted rights was argued in the courts. It was institutionalized in the community action agencies and the antipoverty bureaucracy that grew up around them. And it was dramatized by the urban riots of the late 1960s. These developments, in turn, served to further accentuate the differences between the poor and the rest of us. Just as blacks were entitled to affirmative remedies for the racial discrimination to which they had been subject, it was urged that the poor were entitled to a certain minimal standard of living. Welfare benefits were cast as rights. These claims implied a corresponding duty of the majority of Americans who were neither poor nor black to pay for them. But even before this logic was traumatically extended to busing and affirmative action, the poor were indisputably “them.”
Government analysts, dutifully responding to the demands of federal poverty agencies for a criterion by which to dispense the benefits now due, came up with the elegant notion of a “poverty line.” This was the minimal amount of income that calculations revealed an American family would require to escape unacceptable want. Families whose total income fell below this theoretical line were in poverty; families above it were not.11 The line distinguishing “us” from “them” was now defined with an accountant’s apparent precision.12
The poverty-line definition of the needy, however corrosive of the ideal of mutual obligation, did serve to remind us of how many Americans were unable to support themselves. When the poverty index rose, we knew that the problem was getting worse. That explicit signal provoked comment and debate. A president who presided over a large increase in the poverty index had some explaining to do. But the price we paid for adhering to this symbolic definition, and the separation it implied between the poor and the rest of us, was significant.
Within American political discourse, the problem of poverty was now as neatly delineated as was the poor population itself. The boundary allowed the rest of us to distance ourselves emotionally. There was relief in the notion that the real poor were different from us. We could put aside discomfiting speculation that our comfort and their distress reflected luck as much as anything else. We could also escape the distress that comes from identifying oneself and one’s family with others experiencing hardship. We might acknowledge a moral obligation to alleviate their lot, but we did not share their experience.
5
But in fact, most of the poor were not very different from the rest of us. Between 1975 and 1985 one out of three Americans fell below the poverty line at least once. Half of all who did remained there only one or two years. The poor did not shun work or live off welfare: Two thirds of the nonelderly poor lived in households where someone worked, and most of these families received no welfare payments at all. Two thirds of the very poor were white, living in rural or suburban areas of the country.13 This is not to suggest that the “culture of poverty” was illusory. About 20 percent of the poor remained permanently submerged below the poverty line, trapped within urban ghettos and pockets of rural poverty. The problem of permanent poverty was smaller than commonly understood, however, and the line separating “us” from “them” far less distinct.
Nevertheless, the boundary drawn around “the poor” in the stories we told one another had transformed our understanding, perpetuating many of the problems we sought to solve. When the poor could be any one of us, public assistance was assumed to entail reciprocal benefits and responsibilities. Now it was a matter of charity, requiring no response and presuming no responsibilities on the part of recipients. This emphasized the redistributive nature of the transaction—our magnanimity and their dependency. This perception tended to project itself on us and them alike, undermining whatever sense of reciprocity there might otherwise have been. Poverty programs were chronically subject to the charge that government was doing too much or too little for “them.” The programs seemed inadequate whenever large numbers of people were recorded as being under the poverty line. The programs seemed too generous whenever budget deficits or economic sluggishness seemed to require that government reduce its scope, or whenever “they” seemed undeserving of the benefits they received.
As the separateness of the poor became reinforced through racial, ethnic, or geographic isolation, a widening range of public services took on the character of welfare. As affluent urban dwellers deserted the public schools for private alternatives, public education came to be seen as a means of promoting equal opportunity rather than as an affirmation of our common culture. As middle-class Americans built vacation homes and joined private sports clubs, expenditures for public parks and playgrounds were justified by reference to the needs of poor children for outdoor recreation. As private hospitals—scrambling to cut costs—stopped subsidizing uninsured patients, support for public hospitals was cast as mostly a matter of aid to the poor.
By the 1980s the debate over welfare was essentially a series of variations on the question of how tough we should be on “them.” Ronald Reagan answered it in the way an increasing portion of the public thought it should be answered: very tough. In the name of concentrating resources on the “truly needy,” the administration proceeded to tighten eligibility requirements for food stamps, child nutrition, housing assistance, and Medicaid. It cut back on programs that dealt with anything beyond the bare necessities of life—programs like Head Start (preschool for poor four-year-olds), the Job Corps (work training for unemployed youngsters), vocational schooling, compensatory education, and the like.14