A Patriot's History of the United States: From Columbus's Great Discovery to the War on Terror

by Larry Schweikart


  The ensuing recession shattered the underlying premises of Keynesian economics once and for all. According to the prevailing Phillips Curve theory, an economy could not have both high unemployment and high inflation at the same time. This stemmed from the notion that inflation resulted from government spending for new jobs. It was all poppycock. The government could not create wealth in the 1970s any more than it could in the 1930s. More accurately, a bizarre expectations game occurred—“taxflation”—wherein businesses sensed that when new government programs were announced, their taxes would go up, and they responded by hiking prices merely in anticipation of the new taxes.

  Gerald Ford possessed none of the qualities needed to deal with any aspect of the sinking economy. As a progressive (so-called moderate) Republican, he sympathized with much of the Great Society spending. As a caretaker president, he did not have the public support to force OPEC to increase production or lower prices. And, as a Nixon appointee, he faced a hostile and rogue Congress out to destroy all vestiges of the modern Republican Party, much the way the Radical Republicans in Reconstruction had hoped to kill the Democratic Party. All Ford had in his favor was honesty, but his lack of imagination left him helpless in the face of further business declines. Having no desire for tax cuts that might revive the economy—and blocked by a spendthrift Congress that would not enact tax cuts anyway—Ford launched a campaign that was almost comedic in design. He sought to mobilize public support to hold down prices by introducing WIN (Whip Inflation Now) buttons.

  The damage done to the American economy by almost a decade of exorbitant social spending, increasing environmental and workplace regulations, and Keynesian policies from the Johnson-Nixon administrations cannot be overstated. Yet just as government almost never gets the credit when economic growth occurs, so too its overall impact on the nation’s business health must be tempered by an appreciation of the poor planning and lack of innovation in the corporate sector. All that, combined with the impact of greedy union demands in heavy industry, made even foolish government policies seem relatively insignificant. Perhaps worst of all, inflation had eroded earnings, creating new financial pressures for women to work.

  Sex, the Church, and the Collapse of Marriage

  By the mid-1970s, women’s entry into the workforce was being championed by the twentieth-century feminist movement. Armed with the Pill, feminists targeted the seeming lack of fairness in the job market, which punished women for dropping out of the market for several years to have children. But where their rhetoric failed to change corporations, the Pill changed women. By delaying childbirth, the Pill allowed women to enter professional schools in rapidly growing numbers.27

  Paradoxically, the Pill placed more pressure on women to protect themselves during sex. And rather than liberating women for a career in place of a family, feminism heaped a career on top of a family. Women’s workloads only grew, and the moral burden on women to resist the advances of males expanded geometrically. Equally ironic, as women entered the workforce in greater numbers, increasing the expectation that young married women would work, the “expectations index” for couples soared. Newlyweds saw larger houses with bigger mortgages and more upscale cars as the norm because, after all, they had two incomes. This created a feedback loop that forced women to remain in the labor force after childbearing, and in many cases, after they wished to leave their jobs outside the home.

  With all biological consequences removed from engaging in pre- or extramarital sex, the only barrier remaining was religion. But the church had seen much of its moral authority shattered in the 1960s, when on the one hand, large numbers of white Christians had remained mute during the civil rights struggles, and on the other, liberal-leftist elements in the church had associated themselves with communist dictators. In either case, the church (as many saw it) had allied itself with oppression against liberty.

  When it came to sex, traditional Christianity had appeared hypocritical. Wives who dutifully served their families had to deal with alcoholic husbands and domestic abuse. (It is crucial to understand that perception is reality, and while the vast majority of husbands loved and served their wives, the fact that any domestic abuse occurred without comment from the local pulpit or a visit from the pastor or priest was unacceptable.) The more traditional and fundamentalist churches that preached against divorce and railed against premarital sex said little to nothing about spouse or child abuse in their own congregations. At the other extreme, churches attempting to reach out to women and portray themselves as modern opened their pulpits to female ministers but ignored the moral necessity of demanding chastity and commitment from Christians in sexual matters. The former group preached piety and practiced unacceptable toleration of sin, while the latter celebrated its toleration but ignored holiness.

  For all their failures to stand up for women’s and civil rights, the hard-line fundamentalist churches nevertheless continued to grow throughout the 1960s and early 1970s, if for no other reason than that they stood for clear and unwavering principles. Liberal Protestant denominations, on the other hand, shrank at shocking rates, despite their new inclusive policies. American Baptists, Disciples of Christ, Episcopalians, Lutherans, Methodists, Presbyterians, and the United Church of Christ all lost members from 1960 to 1975. Although people claimed that they stayed home on Sunday because they thought churches were behind the times, those churches most behind the times seemed to thrive.28 Southern Baptists nearly doubled their number of churches between 1958 and 1971. Even more astounding, televangelist Oral Roberts, a charismatic faith healer, reached 2.3 million viewers in a single broadcast, a number that nearly equaled the entire estimated Sunday attendance of all the Methodist churches in the nation! When one combined the audiences of the top eight televised ministries during a single broadcast—virtually all of them outside the mainstream—they matched the estimated church attendance of the top six denominations combined.29 Indeed, one scholar, looking at these viewing trends, concluded that the numbers “reveal a previously unmapped dimension of religion in America, a basic fundamentalist orientation that cuts across denominational lines.”30

  Transcending denominational differences went so far as to see Protestants and Catholics beginning to share similar worship experiences. The Roman Catholic Church had modernized with its Vatican II Council (1962–65), making such changes as having the priest face the congregation during Mass (which was now said in English, not Latin). Nevertheless, the Catholic Church at midcentury witnessed a stunning decline, not only in members, but in clergy. American nuns decreased in number by 14,000 in the last half of the sixties, and by 1976, 45,000 priests and nuns had abandoned the cloth. (Among the vanguard order, the Jesuits, recruitment numbers plummeted by 80 percent in the 1990s.) Although American Catholicism revived in the latter part of the twentieth century because of immigration, it still had difficulty attracting young men to its seminaries and women to its convents. A subculture of homosexuality began to permeate the orders, driving out heterosexual men who vowed to remain celibate and creating what was called the “gaying and graying” of the Church. (In 2002, this exploded into a full-scale scandal when lawsuits were brought against many priests for sexual abuse and pedophilia, threatening to bankrupt the Boston archdiocese. But the Church claimed to be unaware of these offenses.) One subgroup within the Catholic Church, the Catholic Charismatic movement, nevertheless showed strong gains. It did so because it intersected with the Protestant “faith movement” in spiritual expressions such as speaking in tongues, healing, and other supernatural outpourings.

  Clearly a major split had occurred in the American church from 1960 to 1975, and it did so rather rapidly. In 1962, for example, some 31 percent of those polled said that religion was losing its influence, almost twice as many as had answered the question affirmatively five years earlier.31 At nearly the same time, the Federal Communications Commission had begun allowing television and radio stations to use paid programming to fulfill their public service broadcasting requirements. This silenced the mainstream liberal churches, which suddenly found that despite a decade’s worth of monopoly over the airwaves, they could no longer generate enough funds to purchase airtime.32 Instead, the so-called fundamentalist and Pentecostal denominations “dominated the airwaves, having ‘honed their skills at the fringes of the industry.’”33 Indeed, Pentecostal denominations—and even nondenominational congregations led by individual charismatic (in all senses of the word!) leaders—soared in membership. Oral Roberts, the best known of the “faith” ministers, built an entire university and hospital in Tulsa, Oklahoma, and some of his students, such as Kenneth Copeland, or contemporaries, such as Kenneth Hagin, developed powerful healing and “prosperity” ministries.

  Traditional churches, of course, despised the charismatics, whose main identifying practice was speaking in tongues under the influence of the Holy Spirit. These charismatic churches featured several characteristics that antagonized the mainstream churches: they were racially integrated, they crossed all economic lines, and women held prominent positions of power in their organizations. Faith healer Kathryn Kuhlman, for example, had been the first woman with a regular religious show on television. Discrimination still existed, and many of the Pentecostal denominations still frowned on interracial marriage. But compared to the mainstream church, the underground Pentecostal movement was remarkably free of racism, classism, and gender bias.

  Instead of moving in the direction of their flocks, mainstream churches sought to be inclusive, leading them to reinterpret church practices in a number of areas. Most of the mainstream churches, for example, started to ordain women by the 1970s. These female ministers in liberal Protestant churches “tended to side with radical feminists on the most volatile issues of the day,” especially abortion.34 Scarcely had the ink dried on the key sexual reproduction case of the century, Roe v. Wade (1973), than the Methodists, Presbyterians, American Baptists, United Church of Christ, Disciples of Christ, and Episcopal Church all adopted proabortion positions.35

  In 1973 the U.S. Supreme Court, hearing a pair of cases (generally referred to by the first case’s name, Roe v. Wade), concluded that Texas antiabortion laws violated a constitutional “right to privacy.”36 Of course, no such phrase can be found in the Constitution. That, however, did not stop the Court from establishing—with no law’s ever being passed and no constitutional amendment’s ever being ratified—the premise that a woman had a constitutional right to an abortion during the first trimester of pregnancy. Later, sympathetic doctors would expand the context of health risk to the mother so broadly as to permit abortions almost on demand. Instantly the feminist movement leaped into action, portraying unborn babies first as fetuses, then as “blobs of tissue.” A battle with prolife forces led to an odd media acceptance of each side’s own terminology for itself: the labels the media used were “prochoice” (not “proabortion”) and “prolife” (not “antichoice”). What was not so odd was the stunning explosion of abortions in the United States, which totaled at least 35 million over the first twenty-five years after Roe. Claims that without safe and legal abortions, women would die in abortion mills seemed to pale beside the stack of fetal bodies that resulted from the change in abortion laws.

  For those who had championed the Pill as liberating women from the natural results of sex—babies—this proved nettlesome. More than 82 percent of the women who chose abortion in 1990 were unmarried. Had not the Pill protected them? Had it not liberated them to have sex without consequences? The bitter fact was that with the restraints of the church removed, the Pill and feminism had only exposed women to higher risks of pregnancy and, thus, “eligibility” for an abortion. It also exempted men almost totally from their role as fathers, leaving them the easy escape of pointing out to the female that abortion was an alternative to having an illegitimate child.

  Fatherhood, and the role of men, was already under assault by feminist groups. By the 1970s, fathers had become a central target for the media, especially entertainment. Fathers were increasingly portrayed as buffoons, even as evil, on prime-time television. Comedies, according to one study of thirty years of network television, presented blue-collar or middle-class fathers as foolish, although less so than the portrayals of upper-class fathers.37

  Feminists had unwittingly given men a remarkable gift, pushing as they had for no-fault divorce. Divorce laws began changing at the state level in the early 1970s, at which point a full court hearing and proof of cause were no longer required. Instead, if both parties agreed that they had irreconcilable differences, they simply obtained an inexpensive no-fault divorce. This proved a boon for men because it turned the social world into an “arena of sexual competition [making] men and women view each other as prey and their own sex as competitors to a degree that corrodes civility.”38 Divorce rates skyrocketed, with more than 1.1 million divorces occurring in 1979, and the proportion of children under the age of eighteen living in one-parent families rising over the decade from 11 percent to 19 percent. Although it took about twenty years for sociologists to study the phenomenon, scholars almost universally agreed by the 1990s that children of one-parent families suffered from more pathologies, more criminal behavior, worse grades, and lower self-esteem than kids from traditional families.39

  A husband, able to escape easily from matrimony because it no longer proved fulfilling (or, more likely, because he desired a younger woman), now had only to show that he could not get along with his current wife—not that she had done anything wrong or been unfaithful. In turn, what George Gilder, in his controversial Men and Marriage, called the “one-to-a-customer” rule was instantly killed.40 In its place, wealthy older men could almost always attract younger women, but older women (regardless of their personal wealth) usually could not attract men their own age, who instead snapped up younger “hard bodies” (as these physically fit women were called). Whereas in previous eras these younger women would have married middle- or lower-class men of roughly their own age and started families, they now became prey for middle-aged wealthy men with “Jennifer Fever.” The new Jennifer herself lasted only until she, too, was discarded.41 Pools of available middle-aged women, often with money, and younger men may have seemed to sociologists a logical match, but in fact the two groups were biologically and culturally incompatible. Instead, young men with looks and vitality battled older males with money and status for the affections of an ever smaller group of twenty- to thirty-year-old females, while an army of middle-aged female divorcées struggled to raise children from one or more marriages.

  Feminism’s sexual freedom thus placed older women into a no-win competition with the young, and in either group, the losers of the game had limited options. The process slowly created an entirely new class of females who lacked male financial support and who had to turn to the state as a surrogate husband, producing one of the most misunderstood political phenomena of the late twentieth century, the so-called gender gap. In reality, as most political analysts admit, this was a marriage gap. Married women voted Republican in about the same numbers as men did; it was only the single mothers, as casualties of the sexual free-fire zone the feminists had dropped them into, who saw government as a savior instead of a threat.

  Meanwhile, thanks to the women’s movement, a new option opened to women in the 1970s that had not existed to the same degree in earlier decades: a career. Women applied to law and medical schools, moved into the universities, and even gradually made inroads into engineering and the hard sciences.

  Inflation had actually smoothed the way for women to enter professional fields. As families struggled and a second income was needed, men usually preferred their wives to have the highest-paying job possible. Personnel directors and company presidents—usually men—could not help but appreciate the fact that many of their own wives had started to interview for jobs too. Over time, this realization helped override any macho chauvinism they may have possessed.


  The working female brought a new set of economic realities. First, large numbers of women entering the workforce tended to distort the statistics. There is no question that more overall activity occurred in the economy as women now spent more on housekeepers, gas, fast food, and child care. But there were also hidden economic indicators of social pathology: suicides were up, as were drug use, alcoholism, clinical depression, and, of course, divorce. What went almost unnoticed was that productivity—the key measure of all wealth growth—fell steadily throughout the 1970s, despite a tidal wave of women entering the job markets.

  As more people moved into the workforce, there were increased family pressures for child care. Once touted as the salvation of the working family, child care has been shown to be undesirable at best and extremely damaging at worst.42 Moreover, with both parents coming home around six in the evening, families tended to eat out more, send out laundry more often, and pay for yard or other domestic work as well as additional taxes, all of which chewed up most of the extra cash brought in by the wife’s job. Although by 1978 the median family income had risen to $17,640, up from $9,867 in 1970, that increase had occurred by having two adults, rather than one, in the workplace. Children increasingly were viewed as impediments to a more prosperous lifestyle and, accordingly, the number of live births fell sharply from what it had been in 1960.

  All of these indicators told Americans what they already had known for years: the great prosperity of the 1950s had slowed, if not stopped altogether. Social upheaval, which was excused in the 1960s as an expression of legitimate grievances, took on a darker tone as illegitimacy yielded gangs of young boys in the inner cities, and in suburban homes both parents worked forty hours a week to stay afloat. It would provide the Democratic challenger to the presidency with a powerful issue in November 1976, even if no other reasons to oust Ford existed. But events abroad would prove disastrous for Gerald Ford.

 
