Urban Injustice: How Ghettos Happen


by David Hilfiker


  WHAT IS “POVERTY”?

  We talk glibly of poverty without defining our terms, but definitions are important. In this book what I mean by poverty is having an income below the federally determined poverty level. This is the official definition and the one most commonly used in the United States. It is important to be aware, however, that this official poverty level severely understates the actual number of people who live in what most Americans would intuitively consider poverty.

  The “official poverty level” first seeped into government parlance in 1961, when Mollie Orshansky, a staff analyst at the Social Security Administration, needed an objective definition for statistical work she was doing. She reasoned that the financial inability to purchase an adequate diet would be generally considered poverty. In the 1950s, the United States Department of Agriculture (USDA) estimated that the average American family spent about a third of its income on food. Every year the USDA also estimated the cost of a minimally adequate diet. Orshansky, therefore, defined the poverty level as the cost of a minimally adequate diet multiplied by three. That definition stuck, and without real evaluation became the official government standard, which is revised annually, using updated USDA estimates of food costs. Although levels are calculated for various family sizes, when used by itself the term “poverty level” usually refers to the amount a family of four would need to stay out of poverty, which in 2001 was $17,650.

  Unfortunately, Orshansky’s definition is too simplistic for the weight it has had to bear over the last forty years. There are numerous problems. First, the poverty level is held to be the same throughout the continental United States, although the cost of living varies enormously. Someone living on a farm in South Carolina needs less money to live than a person living in the inner city of New York.

  Second, non-cash income like food stamps and housing subsidies was only minimally available in 1961 and is, by definition, excluded from the calculations. A family with an income just below the poverty line who receives food stamps and a housing voucher is clearly better off than another family with an income just over the poverty line who receives neither of these benefits, but the former is considered poor and the latter is not.

  Third, taxes are not taken into account, so neither the expense of taxes nor the income from the Earned Income Tax Credit changes one’s “income” for purposes of the calculation.

  But by far the biggest problem with the poverty level is that it is obsolete. Relative costs of different expenses have changed significantly in the past fifty years. Utility costs have risen faster than the cost of food, as have housing costs. A one-bedroom apartment in the Washington, D.C., area (at the government fair market rent of $716) would be 61 percent of the poverty level income for a family of three. If food still costs 33 percent of their budget, that leaves only 6 percent or $71 a month for all other expenses, including childcare and health care. Technology—washers, dryers, kitchen appliances, television, computers—now eats up a larger portion of expenditures. Probably the biggest single issue, however, is childcare. Because most women with children stayed at home in the 1950s, the cost of childcare, now significant for young families, is still not included in the calculation.

  Such changes mean that the average American family in 2001 spent closer to one-fifth of its income on food, so it would be reasonable to reset the poverty level by multiplying the least expensive food plan by five rather than three, but this would more than double the number of people we would consider poor.

  Unfortunately, the determination of the poverty level has deep political implications. Raising the poverty level to define more people as poor, for example, would boost the arguments of those who want to spend more to ameliorate poverty; lowering the level would support those who believe we are already doing enough. Statisticians both inside and outside the government have suggested a more consistent system, one that would take into account government benefits, the cost of childcare, taxes, earned income credits, and so on. The political implications of redefinition, however, are so loaded that most proposals recommend standardizing any redefinition so that the new number of people considered poor would equal the number under the old system. There would be no attempt to reach some kind of consensus as to who should really be called poor.

  There can be no doubt, however, that for those who live in the cities, where costs are invariably higher than in rural areas, the official poverty level severely underestimates poverty.

  WHAT (AND WHY) IS “WELFARE”?

  The term “welfare” properly means any form of institutional or state assistance to people in need. Local relief payments, disability payments, medical assistance, cash aid to families, food stamps, housing vouchers, and assistance to the elderly are all examples of state-financed welfare. Welfare also includes health insurance and pensions offered by employers, and similar elements of what might be called “the private welfare state.” In the current political debate, however, the term “welfare” has popularly been limited to that form of federal/state public assistance given to single mothers and their families, previously known as Aid to Families with Dependent Children (AFDC). In 1996, under what is now called Welfare Reform, AFDC was dismantled and the money bundled in “block grants” and given over to the state governments for the administration of a new program, Temporary Assistance for Needy Families (TANF). Restricting the discussion of welfare strictly to “cash assistance to poor families,” however, tends to hide the extent of the patchwork American welfare state that does exist and to distort our understanding of the changes that have occurred over the last generation. Direct cash assistance to families through TANF, for instance, is not the only element of welfare successfully attacked and either eliminated or reduced during the last generation.

  We think of welfare’s purpose primarily as the alleviation of poverty, but both public and private welfare have other functions as well. Public assistance programs promote social order and discipline. Government can initiate or expand assistance programs in attempts to appease political protest or unrest, such as Congress’s mandated increases in AFDC benefits after the inner-city riots of the 1960s. Similarly, the state can restrict or withdraw programs in an effort to discipline the poor, for example the recent attempts by several states to control childbearing among the poor by refusing to offer TANF benefits for additional children born to women already on the program. Private welfare benefits have also sometimes been offered in specific response to worker unrest or union organizing efforts. Welfare, and the stigma attached to it, has also been used to frighten working people into accepting without protest low wages and difficult working conditions.1

  Welfare has perhaps most commonly been used as a mechanism for political mobilization. Particularly in local politics, public officials have frequently used public assistance as a reward for political support. Chicago’s Democratic political machine, for example, long wielded this form of patronage as part of its effort to maintain its power base. Ronald Reagan, on the other hand, used his opposition to welfare as a strategic part of his presidential campaigns in 1980 and 1984, reciting anecdotes of “welfare queens” fraudulently receiving multiple checks and driving new Cadillacs.

  Since 1960, welfare benefits have been used in the attempt to make up for past racial injustice. Although Lyndon Johnson’s War on Poverty began as a program directed at white Appalachian poverty, many of its resources were quickly diverted to fight black urban poverty. This was in part due to a fear of urban riots, but for many leaders was also a conscious effort to respond to the Civil Rights movement and represented a heightened consciousness of racial discrimination.

  Each of these purposes is still operative in the current debates over welfare.

  The debate about who “deserves” public assistance dates back at least five hundred years to the beginnings of modern welfare in Europe. Societies have always tried to separate those who suffer through no fault of their own from those who have apparently brought their difficulties upon themselves due to substance abuse, laziness, unwillingness to work, promiscuity, or any other trait deemed undesirable at a given historical moment. The English 1531 Act for the Punishment of Sturdy Beggars, for instance, was among a number of early laws that denied charity to the able-bodied. As a matter of policy, American society has generally tried to confine private charity and governmental assistance to the “deserving,” while insisting that the “undeserving poor” improve their character as a condition for receiving relief.

  The problems with this unending debate are several. It is, in practice, impossible to distinguish with any certainty the “deserving” from the “undeserving,” no matter how defined. If society tries to enforce such a separation through governmental rules and regulations, it quickly discovers that the causes of poverty are complex and sometimes subtle, and that decidedly difficult-to-determine psychological conditions heavily influence judgments of “deservingness.” A person who, on paper, looks lazy and unwilling to work, for example, may, on closer examination, be mentally or emotionally incapable of performing any useful work. It is almost impossible to make these distinctions accurately and consistently through formal regulations. But if society tries to separate the “deserving” from the “undeserving” through subjective personal interviews and one-on-one determinations, local prejudices weigh far too heavily for the overall process to be considered either just or accurate.

  In addition, framed this way, the debate over who is to be helped will largely ignore the structural causes of poverty examined in this book, while the very impossibility of separating the “deserving” from the “undeserving” will ensure that any regulations and policies designed to weed out the latter make life unjustly miserable for the former. Those who ran nineteenth-century poorhouses, for instance, were afraid that “undeserving” people would overrun their institutions. In most cases, the institutions responded to this threat by making life in the poorhouses so miserable that no one would want to stay, were any other choice available. That probably succeeded in keeping out most of the “lazy” people (whatever their actual problems), but at the cost of brutally punishing those who had no other recourse. A current example of this attitude that punishes the needy for fear of making the program too attractive is the level of TANF benefits, which are so low that no one could survive on them. Although benefits differ from state to state, the average maximum payment for a family of three in 1999 was $394 per month or $4,728 per year, approximately one-third of the official poverty level. In Alabama, TANF payments to a family of three with no other income were $164 a month, less than one-sixth of the official poverty level.2

  As might be expected, the definition of who is “deserving” has changed over time. Not so long ago, for example, poor single mothers with young children were considered “deserving,” while we now consider most young welfare mothers “undeserving” of any ongoing assistance.

  OFF ON THE WRONG FOOT

  Those aghast at the low welfare payments and other elements of our contemporary tattered safety net are tempted to look back with nostalgia at the New Deal of President Franklin D. Roosevelt, a program rightly considered the beginning of the modern American public welfare state. In fact, however, the seeds of the current confusion over social welfare were sown during Roosevelt’s administration. We will only understand today’s poverty if we understand the history of social welfare, beginning with the New Deal.

  Aside from the veterans’ and widows’ pensions that ended before World War I, the federal government was rarely involved in welfare until the Great Depression of the 1930s and Roosevelt’s administration. Millions of middle-class families were suddenly thrown into poverty. “The poor” had become “us.” Political attitudes toward welfare changed almost overnight, and there was great demand for federal assistance to those suffering from poverty. Roosevelt quickly created the Federal Emergency Relief Administration, which distributed approximately $18 billion in direct relief from 1933 to 1936. His administration also created work for the unemployed. Beginning in 1933, the Civilian Conservation Corps sent unskilled men aged eighteen to twenty-five to work camps, most of them rural. At its peak it employed more than half a million men. Another program, the Works Progress Administration (WPA), recruited workers of all sorts, ranging from unskilled laborers who built highways to photographers sent out to document the devastation of the dustbowl. At its peak the WPA employed more than three million people. Despite the size of these programs, they served less than a quarter of those eligible for relief.

  Both of these programs were ended when World War II transformed the glut of workers into a shortage, but a number of programs initiated during the Depression became cornerstones of America’s “social insurance” system. Although these have proved powerful anti-poverty programs in their own right, they differed from public assistance in that they were not targeted specifically to the poor, but covered the whole population. As a start, a set of largely voluntary unemployment insurance programs that, prior to the Depression, differed from state to state, was transformed under the 1935 Economic Security Act into a mixed federal-state unemployment insurance program in which the federal government mandated uniform standards that the states were primarily responsible for enforcing. Employers were required by state law to pay premiums for certain levels of unemployment insurance that would then provide a cushion for people who lost their jobs.

  The most important of Roosevelt’s innovations in social insurance, however, was probably the Social Security program, also a part of the Economic Security Act of 1935, which provided benefits not only for the elderly, but also for the disabled. Although the program initially excluded agricultural workers and domestics (and therefore most African Americans), it has since been significantly expanded. Sold to the public as a pay-as-you-go insurance program, Social Security has nevertheless also always been a welfare program, that is, a wealth transfer in which payments by younger, working individuals provided benefits for the retired and disabled. Its nature as a welfare program becomes clear if one considers that most beneficiaries have, up until now, received approximately twice what they would have received if their payments had simply been invested in United States Treasury bonds, and that—compared to their contributions—poorer individuals receive proportionately more than do wealthier.

  Perhaps the greatest indication that Social Security is actually a massive welfare program is the amount of money in the trust. Any true insurance program should be able to stop taking in new business today and still have enough money to meet all of its future obligations. This has never remotely been the case with Social Security, which, at the end of 2000, had $931 billion in its trust fund and liabilities of $9.6 trillion (or $9,600 billion). Social Security has always been a transfer of income from working people to certain people who were not working, not a true insurance program.

  The program was significantly strengthened with markedly increased benefits during the 1960s. Nothing indicates its enormous success more clearly than the 1997 poverty rate for the elderly—just over 10 percent. That same year, the poverty rate for children, who have no such program, was twice as high. It is estimated that in the absence of Social Security payments, 50 percent of today’s elderly would be poor.

  In the Economic Security Act of 1935 there was also a matching grant program that encouraged states to assist the elderly who had not worked long enough to collect benefits under Social Security. Because the program was administered by the states, its benefits varied greatly from state to state and were usually not sufficient to live on. Nevertheless, such limited old-age assistance, which did not discriminate as much against African Americans, was, for the most part, what most Americans thought of as “welfare” until the mid-1950s.

  The New Deal cemented into place the ultimately untenable distinction between “social insurance” and “public assistance” that has, in the end, prevented the United States from developing a more comprehensive program of economic security similar to those in Canada and the countries of Western Europe. In the United States, we consider programs like Social Security, Medicare, disability pensions, and disaster relief to be social insurance. All of us pay in and, in times of trouble, any one of us can take out. Usually, there is no stigma attached to taking help from a social insurance program; we think that that’s what it’s there for and that we should take it if we need it. Yet we consider payments to families with young children, food stamps, general relief, and Medicaid to be “public assistance,” akin to charity, undeserved handouts given by a generous “us” to a handicapped or malingering “them.” Stigma is built in: public assistance programs seem to us to be only for those who just do not have what it takes to succeed. By calling a program public assistance, we assume that someone undeserving of help will likely try to cheat. In fact, social insurance and public assistance are both forms of wealth transfer. Resources are taken from certain groups of people (those who are working) and provided to other groups (largely those who are not).3

  It is no coincidence that social insurance programs are administered by the federal government using nationally uniform standards and benefits pegged to inflation. Public assistance programs, on the other hand, tend to be administered by state or local governments, with standards that vary from place to place, while cost-of-living raises for public assistance programs generally depend upon the uncertainties of local legislative whims and processes. Since few state or local governments permit “deficit spending,” in times of recession, when the need for public assistance is highest, local and state tax coffers dwindle. As a consequence, federally administered “social insurance” programs have substantially better benefits than “public assistance.” Compare the average $394 TANF payment for a family of three to the usual $515 payment for a single disabled person covered by the federal SSI program. As another example, in the twenty years before the Welfare Reform Act of 1996, AFDC benefits (adjusted for inflation) declined by 40 percent, while Social Security benefits remained stable.
