BOWLING ALONE


by Robert D. Putnam


  In the first half of the twentieth century older people had been much more likely to commit suicide than younger people, presumably because of the accumulation of frustrations and physical frailties over the course of the life cycle. During the second half of the twentieth century, however, suicide became less and less common among older people and more and more common among younger people. In fact, this is precisely the pattern we might have predicted as the well-integrated long civic generation aged (reducing the traditionally high rates among old people) at the same time that the less well-integrated boomers and X’ers entered the population (raising the traditionally low rates among young people). As the twentieth century ended, Americans born and raised in the 1920s and 1930s were about half as likely to commit suicide as people that age had been at midcentury, whereas Americans born and raised in the 1970s and 1980s were three or four times more likely to commit suicide than people that age had been at midcentury. Whether or not generational differences in social capital fully account for figure 73, the figure surely shows that the life experiences of people who came of age after 1950 were very different from those of people who came of age before 1950.

  Figure 73: Age-Related Differences in Suicide Rates, 1950–1995

  In fact, a broadly similar trend toward youthful suicide at a time when suicide rates for the rest of the population were falling has been found in many Western countries. Since clinical depression is a prime risk factor for suicide, the rise in observed suicide rates for young people is sadly consistent with the generationally based increase in depression. As the leading researchers in the field summarize hundreds of studies in dozens of advanced countries:

  It is striking that the rise in psychosocial disorders over the last 50 years is a phenomenon that applies to adolescents and young adults and not to older people. The explanation, therefore, has to lie in social, psychological or biological changes that impinge on younger age groups.32

  Suicide is a powerful but (fortunately) rare symptom of psychic distress. Less dire, more pervasive symptoms are tapped by the annual DDB Needham Life Style surveys on headaches, indigestion, and sleeplessness—what we term “malaise.” As figure 74 shows, in the mid-1970s the frequency of these symptoms did not differ significantly by age. On average, people in their sixties and seventies were neither more nor less likely than their children or grandchildren to be plagued by upset stomachs, migraines, and sleepless nights. Over the ensuing two decades, however, despite short-term fluctuation, among older people these symptoms of malaise tended to fade, while middle-aged and (especially) younger people became more and more afflicted. Between 1975–76 and 1998–99 the fraction of adults under thirty who ranked high on symptoms of malaise jumped from 31 percent to 45 percent, while the comparable index of suffering for adults sixty and over slipped from 33 percent to 30 percent. Slightly more than half of this growing gap can be attributed to the added financial worries that young people have encountered over the last quarter century, but that still leaves a substantial increase in youthful malaise unexplained, for even among the financially comfortable the generation gap in malaise widened steadily.33

  Over these same years (net of life cycle effects) general contentment with life declined among people under fifty-five, while increasing modestly among people over that age. Surveys in the 1940s and 1950s had found that younger people were happier than older people. By 1975 age and happiness were essentially uncorrelated. By 1999, however, younger people were unhappier than older people.34 The bottom line: a widening generation gap in malaise and unhappiness. The trends represented in figure 73 and figure 74 are, sadly, perfectly consistent: The younger you are, the worse things have gotten over the last decades of the twentieth century in terms of headaches, indigestion, sleeplessness, as well as general satisfaction with life and even likelihood of taking your own life.

  At midcentury young Americans (those we would come to label as the long civic generation) were happier and better adjusted than other people— less likely to take their own lives, for example. At century’s end that same generation (now in retirement) remains distinctively well-adjusted psychologically and physiologically. On the other hand, at century’s end the children and grandchildren of the long civics (those we label boomers and X’ers) are much more distressed and more likely to take their own lives than their grandparents had been at their age.

  Figure 74: Growing Generation Gap in Malaise (Headaches, Insomnia, Indigestion)

  As yet, this remarkable, well-established, and disturbing trend toward suicide, depression, and malaise among America’s younger generations has no widely accepted interpretation. One plausible explanation, however, is social isolation. Educational sociologists Barbara Schneider and David Stevenson recently reported that “the average American teenager typically spends approximately three and a half hours alone each day…. Adolescents spend more time alone than with family or friends.” Compared with teenagers studied in the 1950s, young people in the 1990s reported fewer, weaker, and more fluid friendships. Similarly, Martin Seligman points out that the depression epidemic has spared the close-knit Old Order Amish community, even though careful studies show that the rate of other mental diseases is no different in that community from that in the wider American society. He traces the growth of depression among younger Americans to “rampant individualism,” coupled with “events that have weakened our commitment to the larger, traditional institutions of our society.”

  Individualism need not lead to depression as long as we can fall back on large institutions—religion, country, family. When you fail to reach some of your personal goals, as we all must, you can turn to these larger institutions for hope…. But in a self standing alone without the buffer of larger beliefs, helplessness and failure can all too easily become hopelessness and despair.35

  Our evidence shows that this trend encompasses not merely the ultimate trauma of suicide, but also chronic symptoms of milder distress.

  Social isolation is a well-established risk factor for serious depression. In part, depression causes isolation (partly because depressed people choose isolation and partly because depressed people are not pleasant to be around). However, there is also reason to believe that isolation causes depression.36 Though all the evidence is not in, it is hard to believe that the generational decline in social connectedness and the concomitant generational increase in suicide, depression, and malaise are unrelated.

  Against this bleak picture of social isolation and civic disengagement among recent generations must be set one important countervailing fact: Without any doubt the last ten years have seen a substantial increase in volunteering and community service by young people. The annual survey of entering college freshmen for 1998 reported that a record proportion of students volunteered during their last year of high school—74 percent, compared with a low of 62 percent in 1989. Volunteering on a regular basis also is up, with 42 percent of freshmen donating their time for at least one hour a week, compared with 27 percent in 1987. This upturn in volunteering by high school students in the 1990s is also confirmed in the annual Michigan Monitoring the Future surveys, as well as the DDB Needham Life Style surveys.

  Why this welcome and encouraging increase in volunteering has occurred is not yet clear. In part it may simply reflect stronger public encouragement (including, in some cases, graduation requirements) for community service. If this youthful volunteering is driven only by official pressure, without the undergirding of a broader civic infrastructure of community organizations, both religious and secular, then one cannot be optimistic that the increase will prove durable. On the other hand, a more optimistic interpretation would be that the forty-year trend toward generational disengagement is at last bottoming out.

  GENERATIONAL SUCCESSION is, in sum, a crucial element in our story. However, it has not contributed equally powerfully to all forms of civic and social disengagement. The declines in church attendance, voting, political interest, campaign activities, associational membership, and social trust are attributable almost entirely to generational succession. In these cases, social change is driven largely by differences from one generation to another, not by changing habits of individuals. By contrast, the declines in various forms of schmoozing, such as card playing and entertaining at home, are attributable mostly to society-wide changes, as people of all ages and generations tended to shift away from these activities. The declines in club meetings, in dining with family and friends, and in neighboring, bowling, picnicking, visiting with friends, and sending greeting cards are attributable to a complex combination of both society-wide change and generational replacement.

  In other words, one set of forces has affected Americans of all ages over the last several decades. These society-wide forces have been especially detrimental to private socializing, such as playing cards and entertaining at home. The consequent declines have been moderately strong and visible in the short run, since the behavior of individuals of virtually all generations has been affected. The allure of electronic entertainment is a likely explanation for these trends, as it has transformed the way all of us spend our time.

  A second set of forces has produced substantial differences across different generations, while not changing individuals. These generational forces have especially affected public engagement, such as religious observance, trust, voting, following the news, and volunteering. Because these forces have operated through generational succession, their effects have been more gradual and less immediately visible. Nevertheless, Americans born in the first half of the twentieth century have been persistently more likely to vote, to go to church, to volunteer, to keep up with public affairs, and to trust other people than Americans born in the second half of the century.

  Some activities have been buffeted by both the society-wide effects on private socializing and the generational effects on public norms. Club meetings, family dining, and local organizational leadership are excellent examples of this type of change. Because such activities have been affected by both short-run and long-run changes, they have evidenced some of the most dramatic changes of all, such as the 60 percent fall in club meetings, the 53 percent fall in service as officer or committee member of a local group, and the 60 percent increase in families that customarily dine apart.

  Since the link between generational change and declining civic engagement varies from domain to domain, it is somewhat misleading to form a single summary of the role of generational change in accounting for the declines surveyed in section II of this book. Nevertheless, as a rough summary it seems fair to say that about half of the overall decline in social capital and civic engagement can be traced to generational change.37 However, to say that civic disengagement in contemporary America is in large measure generational merely reformulates our central puzzle. The roots of our lonely bowling probably date to the 1940s and 1950s, rather than to the 1960s, 1970s, and 1980s, but what force could have affected Americans who came of age after World War II so differently from their parents and even from their older brothers and sisters?

  • • •

  A NUMBER OF SUPERFICIALLY PLAUSIBLE CANDIDATES fail to fit the timing required by this new formulation of our mystery. Family instability, for example, seems to have an ironclad alibi for what we have now identified as the critical period, for the generational decline in civic engagement began with the children of the maritally stable 1940s and 1950s. The divorce rate in America actually fell after 1945, and the sharpest jump in the divorce rate did not occur until the 1970s, long after the cohorts who show the sharpest declines in civic engagement and social trust had left home. Similarly, working mothers are exonerated by this respecification of our problem, for the plunge in civicness among children of the 1940s, 1950s, and 1960s happened while mom was still at home. Neither economic adversity nor affluence nor government policies can easily be tied to the generational decline in civic engagement, since the slump seems to have affected in equal measure those who came of age in the placid fifties, the booming sixties, the busted seventies, and the go-go eighties.

  Several other factors fit the evidence better. First, the generational reformulation of our central mystery raises the possibility that the wartime Zeitgeist of national unity and patriotism that culminated in 1945 reinforced civic-mindedness. It is a commonplace of sociology that external conflict increases internal cohesion. As sociological pioneer William Graham Sumner wrote in 1906:

  A differentiation arises between ourselves, the we-group, or in-group, and everybody else, or the others-groups, out-groups…. The relation of comradeship and peace in the we-group and that of hostility and war towards others-groups are correlative to each other. The exigencies of war with outsiders are what make peace inside…. Loyalty to the group, sacrifice for it, hatred and contempt for outsiders, brotherhood within, warlikeness without—all grow together, common products of the same situation.

  We noted in chapter 3 that membership in civic associations has spurted after both major wars in the twentieth century, and political scientist Theda Skocpol has extended this argument to the whole of American history. In chapter 5 we observed that union membership has historically grown rapidly during and immediately after major wars. Historians Susan Ellis and Katherine Noyes emphasize that to understand the origins of American volunteering, one must consider the history of American involvement in wars. “Volunteers are frequently active in the movements that lead to war, in the support of efforts to win war, in the protest against war, and in rebuilding society after war.”38

  During the Civil War women in the North formed Ladies’ Aid Societies to make bandages, clothing, and tents for soldiers, and eventually a group of Ladies’ Aid Societies banded together to form the U.S. Sanitary Commission, which became the largest relief organization during and after the war. Drawing on her experience as a battlefield nurse with the Sanitary Commission, Clara Barton formed the American Red Cross in 1881. The war also gave a powerful boost to fraternal associations appealing to the spirit of camaraderie and mutual sacrifice fostered by shared wartime adversity. Five of what would become the largest associations of the late nineteenth century and early twentieth century—the Knights of Pythias, the Grange, the Benevolent and Protective Order of Elks, the Ancient Order of United Workmen, and the Grand Army of the Republic—were founded between 1864 and 1868. A similar, if less pronounced, spurt in voluntary activity in civil society was associated with World War I.39

  The most relevant example, however, is the extraordinary burst of civic activity that (as we saw repeatedly in section II) occurred during and after the Second World War. Virtually every major association whose membership history we examined—from the PTA, the League of Women Voters, and the American Society of Mechanical Engineers to the Lions Club, the American Dental Association, and the Boy Scouts—sharply expanded its “market share” between the mid-1940s and the mid-1960s. As we observed, there were similar postwar spurts in other community activities from league bowling and card playing to churchgoing and United Way giving.

  World War II, like earlier major wars in U.S. history, brought shared adversity and a shared enemy.40 The war ushered in a period of intense patriotism nationally and civic activism locally. It directly touched nearly everyone in the country. Sixteen million men and women served in the armed forces, including six million volunteers. They and their immediate families made up at least one-quarter of the population. Of men born in the 1920s (the cohort that would prove to be the core of the “long civic generation”), nearly 80 percent served in the military.41 In millions of front windows hung blue stars, emblematic of a son or husband in the armed forces, and a dismaying number of gold stars, signifying a lost loved one. And the agonizing task of deciding which young men would be sent off to war lay in the hands not of a distant federal bureaucracy, but of thousands of lay draft boards across the country.

  Patriotic themes, including civilian service—civil defense, rationing, scrap drives, War Bond sales—pervaded popular culture, from radio shows to the comics section of newspapers, from Hollywood to Broadway to Tin Pan Alley. Historian Richard Lingeman reported, “American flags were displayed everywhere—in front of homes, public buildings, fraternal lodges. Elks, Lions, Kiwanis, Rotary, even trailer camps, gas stations, and motor courts had them.” The war reinforced solidarity even among strangers: “You just felt that the stranger sitting next to you in a restaurant, or someplace, felt the same way you did about the basic issues.”42

  The government sought whenever possible to use voluntary cooperation and resorted to controls in piecemeal fashion—not least out of careful political calculation. Wrote one Democratic Party operative, opposing gas rationing before the 1942 congressional elections, “An appeal by the President for voluntary cooperation will get patriotic support … and will be politically safer.”43

  Treasury secretary Henry Morgenthau pressed for a massive advertising campaign to sell War Bonds in the hope that bond campaigns would “make the country war-minded.” Batman flogged war bonds from the cover of his comic book, Betty Grable auctioned off a pair of nylons for $40,000, and Marlene Dietrich toured sixteen Ohio towns in a Jeep. It worked: twenty-five million workers signed up for payroll savings plans, and in 1944 E-bond sales absorbed 7.1 percent of after-tax personal income.44

  Superstar crooner Bing Crosby was enlisted to rally support for scrap drives:

 
