BOWLING ALONE


by Robert D. Putnam


  Continuity: The reliability of our time-lapse sequence also depends heavily on the number of snapshots at our disposal. In assessing social change, two observations are better than one, but many are much better than two. Literally nothing sensible can be said about change from a single photo or a single survey. Though this point seems obvious, otherwise intelligent folks sometimes claim to detect directions of social change from a single observation, which is just as silly as making a claim about global warming from a single glance at the thermometer.4

  Data from two points in time offer some leverage for testing claims about change but are vulnerable to measurement inconsistency at either end. A single measurement error—a subtle change in question order, for example—might lead to a mistaken judgment about the overall trend. Or suppose that a survey of church attendance in 1964 was taken in the middle of August vacations and the same question in 1994 happened to be posed during Easter week. With only two points in time, we might well be misled into thinking that religious observance was booming in the 1990s. Just as it would be foolhardy for students of global warming to make much of a single pair of temperature readings two decades apart, so too in assessing social change, random fluctuations can invalidate judgments based on only a few data points.

  Change measured at multiple points in time is far more reliable: if a given variable increases steadily from time 1 to time 2, from time 2 to time 3, and so on to time 10, it becomes virtually impossible to conceive of a series of measurement slipups that might have produced the trend. In short, for reliable assessment of social change, we need not merely comparable measurements, but comparable measurements repeated as many times as possible. For that reason, in this book I have relied most heavily on surveys that posed the same question dozens—even hundreds—of times over the last quarter of the twentieth century.
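  To make the point concrete, here is a minimal simulation sketch (all numbers invented for illustration): a single corrupted reading can reverse a two-point comparison, yet it barely budges a slope fitted across ten observations.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 1985)                 # ten annual "surveys"
truth = 60 - 0.5 * (years - 1975)             # a steady decline in participation
observed = truth + rng.normal(0, 0.3, years.size)
observed[0] -= 5.0                            # one corrupted reading at the start

# Two-point comparison: the bad endpoint suggests a rise.
print("two-point change:", round(observed[-1] - observed[0], 2))

# A slope fitted across all ten points still recovers the decline.
slope = np.polyfit(years, observed, 1)[0]
print("fitted annual change:", round(slope, 2))
```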

  Comprehensiveness: Just as in the case of membership rosters, our surveys must cover a wide range of activities. Even if a question is literally invariant, its accuracy as a measuring rod may change over time. We might consider a question about frequency of bowling as an indicator of informal social togetherness. However, if bowling were gradually replaced by softball or soccer as the leisure sport of choice among Americans, then an accurately reported decline in team bowling might simply have been offset by a rise in softball or soccer, both team sports.5 So we must cast our net as widely as possible.

  Timeliness: Since social change proceeds unevenly, measurement periods must be matched to hypotheses about the scale and timing of change. Our interest is not “social change” in the abstract. We want to know what, if anything, has happened to our communities over the last half century or so. Just as we could infer little about global warming by comparing yesterday to today, so too we can infer little about social change over the past several decades by examining evidence over the last few years—or over the last few centuries, for that matter. So we must always ask about any trend not just “What’s changing?” but “What’s changing over what period?” A fair test of our thesis requires comparable data over as much of the last three to five decades as possible.6

  The good news is that several national survey archives provide comparable, continuous, and comprehensive evidence on the contours of social change. The bad news is that with rare exceptions, these collections did not begin before the mid-1970s.7 There is reason to suspect that some important shifts in American community life began in the mid-1960s, but few of our cameras began operating until about a decade later. We cannot be sure what was happening before the shutter on our social time-lapse camera was first triggered, but the survey archives probably missed some of the most interesting action. This deficiency is one important reason for taking advantage of organizational records. It is also a reason for paying special attention to those few surveys that span the earlier period, such as the University of Michigan–NIMH study cited in chapters 3, 4, and 6.

  One last issue of methodology: Should we measure absolute or relative change, and if relative, relative to what? Should we consider the absolute number of participants or contributions to some community purpose, or should we instead use some relative standard of comparison? Organizations and headline writers often boast of growing participation in absolute terms—“the XYZ Club has a record number of members this year!” “Record number of Angelenos go to the polls!” “Local church giving hits all-time high!” But absolute numbers can be badly misleading.

  If the total vote rises by 5 percent, while voting-age population is rising by 10 percent, participation has actually fallen. Conversely, if membership in the Grange falls by 5 percent, while the number of farmers is falling by 50 percent, the involvement of the average farmer has actually risen. If membership in the local Parent-Teacher Association has fallen merely because there are fewer parents nowadays, we would not want to count that as evidence of civic decline. Conversely, if the number of lawyers in town doubled, while membership in the bar association grew by only 5 percent, it would be misleading to conclude that lawyers were becoming more active in professional affairs. In short, we generally should consider what economists call “market share”: What proportion of the eligible population takes part in any given activity?8
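  As a purely illustrative sketch, the “market share” correction is just a division by the eligible population; the figures below are invented to echo the voting example above.

```python
def market_share(participants: int, eligible: int) -> float:
    """Proportion of the eligible population taking part."""
    return participants / eligible

votes_then, votes_now = 1_000_000, 1_050_000   # total vote up 5 percent
pop_then, pop_now = 2_000_000, 2_200_000       # voting-age population up 10 percent

print(market_share(votes_then, pop_then))      # 0.500
print(market_share(votes_now, pop_now))        # ~0.477: participation actually fell
```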

  One important (and, it turns out, highly controversial) instance of relative vs. absolute change is this: When examining changes in civic involvement, should we control for educational levels? The argument for doing so is simple and powerful. Education is one of the most important predictors—usually, in fact, the most important predictor—of many forms of social participation, from voting to chairing a local committee to hosting a dinner party. Moreover, educational levels of the American public rose very sharply during precisely the period of interest to us. So it seems logical to “control for” education by asking, for example, about the civic involvement of the average college graduate. In effect, controlling for education in this way assumes that, given the growth in educational levels, we should have expected growth in civic involvement; if involvement instead declines relative to educational levels, some other factor must be depressing it. By analogy, if we found that vocabulary skills in America were steady or falling despite rising levels of education, we surely would look for some other factor (like TV, for example) that might have been simultaneously tending to depress literacy. At least until recently, controlling for educational levels has been the conventional approach of social scientists in estimating changes in social and political participation.
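  A tiny numerical sketch (all figures invented) of what “controlling for education” means in practice: track participation within each educational stratum, so that a shift in the educational mix of the population cannot mask or mimic a trend.

```python
import pandas as pd

# Year, educational stratum, share of the sample, participation rate.
rows = [
    (1974, "high school", 0.8, 0.40),
    (1974, "college",     0.2, 0.80),
    (1994, "high school", 0.4, 0.30),
    (1994, "college",     0.6, 0.60),
]
df = pd.DataFrame(rows, columns=["year", "education", "share", "rate"])

# Overall rate: a share-weighted average, flat at 0.48 in both years...
overall = df.assign(w=df["share"] * df["rate"]).groupby("year")["w"].sum()
print(overall)

# ...yet within every educational stratum the rate fell. The spread of
# college education masked a real decline.
print(df.set_index(["year", "education"])["rate"])
```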

  Recently, however, some scholars have pointed out that many of the sociological effects of education may themselves be relative, not absolute.9 If more people now have a college degree, perhaps the sociological significance of the credential has been devalued. Social status is, for example, associated with education, but we would not necessarily assume that just because more Americans are educated than ever before, America has a greater volume of social status than ever before. To the extent that education is merely about sorting people, not about adding to their skills and knowledge and civic values and social connections, it is misleading to “control for” educational change.

  There is no agreement among scholars on this issue. The core issue is whether (holding constant my own education) I become more or less likely to participate civically as those around me become more educated. In some cases the effect of education is mainly relative: intimidated by the eggheads around me, I may be less likely to speak up at a public meeting in a college town than in a more ordinary community. In such cases we should not expect rising educational levels to push up participation. In other cases my propensity to participate seems likely to rise with the level of education of my neighbors: I am more likely to join a reading group, for example, if I live in a community with lots of other educated readers. In such cases rising levels of education should push up participation rates even faster.

  Evidence uncovered in the course of this research strongly suggests that the effects of education on social participation are typically absolute, not relative.10 My education increases my social participation, and generally speaking, your education does not lower my participation. So if we both graduate from college, we should both tend to become more civically engaged. Under these circumstances it would be appropriate to control for rising educational levels. However, doing so has the effect of amplifying declines in participation and minimizing increases, so given the nature of my argument, the more conservative course is not to control for education.

  In the analyses reported in this book I generally do control for changes in population, but I do not usually control for changes in the educational composition of the population. This rule of thumb stacks the cards against my hypothesis. The upshot is that the evidence presented in this book may well understate the gross decline in civic engagement in America over the last half century.

  Statistical controls are also relevant to another recurrent issue in this book, that of assessing causes and effects. Suppose that we are interested in the connection between TV watching and civic engagement, and suppose that we find that heavy TV watchers are rarely active in organizations. Before concluding that TV inhibits civic participation, however, we must consider other factors, such as social class, that might make this correlation spurious: perhaps working-class people watch more TV, whereas organizational leadership is monopolized by the middle class. One way to check this possibility is to control statistically for social class, in effect comparing the participation rates of people of the same class whose viewing habits differ.

  Statistical techniques such as multiple regression allow us to control simultaneously for many possible confounding variables, particularly when (as, fortunately, in our case) very large survey archives are available. Virtually every generalization in this book has been subjected to detailed statistical analysis of this sort, controlling simultaneously for age (or year of birth), gender, education, income, race, marital status, parental status, job status (working full-time, part-time, or not at all), and size of community of residence. In addition, where relevant, I controlled for other background factors, including year of survey, region, financial worries, homeownership, residential mobility (both past and anticipated), commuting time, general leisure activity, self-reported time pressure, self-reported health, and other factors. To be sure, controls of this sort, though necessary, are not always sufficient to rule out spuriousness. For this reason I have ensured that the data that underlie our conclusions will be readily available to other researchers, so that they can explore alternative interpretations.11 However, I have also undertaken due diligence myself in the analyses for this book to rule out obvious spuriousness. To keep complicated statistical apparatus from interfering with the presentation of my main conclusions, the graphs and charts here typically present the data without multivariate controls, but in each case I have also conducted extensive tests to be sure that the underlying relationship was not spurious.12
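  Purely as an illustration of the logic of such controls (synthetic data and invented coefficients, not the book’s actual analysis), here is a sketch of how regression separates a raw correlation from an adjusted one, using the TV example above with education standing in for social class.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
education = rng.normal(13, 2.5, n)                       # years of schooling
tv_hours = 5 - 0.15 * education + rng.normal(0, 1, n)    # class-linked viewing
participation = 0.4 * education - 0.5 * tv_hours + rng.normal(0, 1, n)
df = pd.DataFrame({"participation": participation,
                   "tv_hours": tv_hours,
                   "education": education})

naive = smf.ols("participation ~ tv_hours", data=df).fit()
adjusted = smf.ols("participation ~ tv_hours + education", data=df).fit()

# The raw coefficient overstates TV's association with (low) participation;
# holding education constant, part of the correlation proves spurious.
print(round(naive.params["tv_hours"], 2))      # about -0.83 in this toy setup
print(round(adjusted.params["tv_hours"], 2))   # about -0.50, the built-in effect
```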

  One final cosmetic issue about the figures in this book: In every case I present every available data point. Often, however, short-term fluctuations obscure the longer-term trends. For example, figure 2 presents annual data from the Department of Commerce on the number of political organizations. Even a cursory examination of this chart, however, reveals a clear biennial rhythm (more organizations in election years), along with a few other deviations from the longer trend (such as the modest dip in 1995). In this, as in all other graphs, I show both a dotted line linking the actual data points and a darker, smoother curve that conveys the longer-term trend. The darker lines (calculated simply as the best-fitting polynomial curves) are intended to ease reading of the figures, but purists who prefer the unvarnished data may simply ignore the darker lines.
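  A minimal sketch of the smoothing just described (the series is invented, and the polynomial degree here is an arbitrary choice for illustration): the raw points keep their short-term bumps, while a low-order polynomial fitted by least squares traces the longer-term trend.

```python
import numpy as np

years = np.arange(1968, 1998)
raw = 50 - 0.6 * (years - 1968) + 3 * (years % 2 == 0)   # election-year bumps

coeffs = np.polyfit(years, raw, deg=3)    # a best-fitting polynomial curve
smooth = np.polyval(coeffs, years)

for y, r, s in zip(years[:6], raw[:6], smooth[:6]):
    print(y, round(r, 1), round(s, 1))    # raw series vs. smoothed trend line
```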

  What are the primary sources of our evidence? The two most widely used academic survey research archives for American social and political behavior are the National Election Studies (NES) and the General Social Survey (GSS). Virtually every two years since 1952, coinciding with national elections, the Survey Research Center of the University of Michigan has surveyed a sample of Americans about their political behavior (NES). Almost every year since 1972 the National Opinion Research Center at the University of Chicago has conducted a broadly similar set of surveys on social behavior (GSS). Both archives provide high-quality scientific evidence about changes in Americans’ attitudes and behavior, and I have relied on both archives in this book. For our purposes the utility of the NES is limited, however, for it focuses on elections and gives little attention to everyday civic participation. The GSS covers a wider range of activities, although in the domains most central to our interests its continuous coverage is largely confined to formal group membership, church attendance, and social trust. Fortunately, in the course of this research my colleagues and I have discovered several other important survey archives to supplement the GSS and NES.13

  Roughly ten times per year between September 1973 and October 1994 the Roper survey organization interviewed in person a national cross section of approximately 2,000 persons of voting age, yielding a survey archive of more than 410,000 respondents over more than two decades, the Roper Social and Political Trends data set.14 The sampling method (a multistage, stratified probability sample with quotas for sex, age, and employed women) remained essentially constant over the entire period. Many questions of social and political significance were asked repeatedly over this period, and our analysis here draws frequently on this archive. Not all questions were asked in all surveys, and thus our analysis of evidence based on the Roper Social and Political Trends archive is sometimes based on much less than the entire sample of 410,000 respondents. (I have noted in such cases the specific surveys in which the relevant questions appeared.) However, one crucially important set of questions relevant to civic engagement (summarized in table 1) appeared on every single survey along with standard demographic information, and this massive sample enables us to examine even forms of participation, like running for public office, that are quite rare.
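  To see why the pooled archive matters for rare behaviors, consider a back-of-the-envelope sketch (the 0.5 percent figure below is hypothetical, not a number from the book): the margin of error of an estimated proportion shrinks with the square root of the sample size.

```python
import math

p = 0.005                       # hypothetical: 0.5 percent ever ran for office
for n in (2_000, 410_000):      # one Roper survey vs. the pooled archive
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n={n:>7,}: estimate {p:.2%} +/- {margin:.2%}")
```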

  In the midst of this research my colleagues and I stumbled onto a second source of annual survey evidence on civic and social activities covering the last quarter of the twentieth century: DDB Needham Life Style surveys (DDB). Begun in 1975 and still continuing, these extraordinary surveys provide regular barometric readings on scores of social, economic, political, and personal themes, from international affairs and religious beliefs to financial worries and condom usage. With an annual sample of 3,500–4,000, this archive through 1999 contained more than 87,000 respondents over the last quarter of the twentieth century. To the extent that it can be shown to be methodologically reliable, the DDB Needham Life Style archive constitutes one of the richest known sources of data on social change in America in the last quarter of the twentieth century. Because of its novelty and importance, I present here some additional information about this archive.

  Each year since 1975 the DDB Needham advertising agency has commissioned Market Facts, a commercial polling firm, to question a national panel of American households about their consumer preferences and behavior.15 Most of the roughly twenty-page written questionnaire is taken up with inquiries about detergents, mutual funds, automobiles, and so on. However, every year a core set of questions has been posed about “life style” issues, including media usage, financial worries, social and political attitudes, self-esteem, and a wide range of social behavior, such as reading, travel, sports and other leisure activities, family life, and community involvement.

  From the point of view of DDB Needham’s commercial clients, these “life style” questions are valuable for planning marketing strategies, defining market niches, and drafting advertising copy. Are churchgoers more likely to send Christmas cards, for example? Are fast-food restaurants replacing the family dinner for two-career families? Are frequent movie-goers more liberal in their social attitudes? Are rock concert fans more likely than museum buffs to watch Monday Night Football?16 From the point of view of social science, however, the DDB Needham Life Style data provide an unparalleled source of information on trends in social behavior over the last two decades.

  However, the DDB Needham Life Style survey data are not without flaws. One important limitation is obvious and relatively easy to compensate for, but a second is more serious. The first is that until 1985 only married households were included in the sample. However, I have found few cases in which the observed trends between 1985 and 1999 differ significantly between married and single respondents, although in a number of cases there are modest differences in the levels. For example, married people attend church more often than single people do, while singles attend club meetings more often than married people, but the trends in both church- and clubgoing are essentially identical for the two groups. In all cases where this sampling peculiarity poses potential problems of analysis, I analyzed the data separately by marital status to confirm that the “missing 1975–84 singles” did not vitiate our substantive conclusions. Where the levels and/or trends in traits of interest vary by marital status, I have made an appropriate adjustment to track changes over the entire 1975–98 period.17
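  A hedged sketch of the robustness check just described (synthetic data; the column names and magnitudes are invented): fit the trend separately for married and single respondents and confirm that the slopes agree even where the levels differ.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
rows = []
for year in range(1985, 2000):
    for status, base in (("married", 30.0), ("single", 22.0)):  # levels differ
        scores = base - 0.4 * (year - 1985) + rng.normal(0, 4, 200)
        rows += [{"year": year, "marital_status": status, "church_attendance": s}
                 for s in scores]
df = pd.DataFrame(rows)

# Least-squares slope of the yearly means, computed within each group.
for status, group in df.groupby("marital_status"):
    means = group.groupby("year")["church_attendance"].mean()
    slope = np.polyfit(means.index, means.values, 1)[0]
    print(status, round(slope, 2))   # both near -0.4: same trend, different levels
```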

 
