by Ben Shapiro
And, of course, the creator of an organization that remains relevant down to our day favored eugenics as well: Margaret Sanger, founder of Planned Parenthood. Sanger wrote in 1921 that the question of sterilizing the disabled was “the most urgent problem today.”70 In one of her speeches, she suggested the sterilization or quarantining of some “fifteen or twenty millions of our population,” who would—she put it rather gently—“be organized into soldiers of defense—defending the unborn against their own disabilities.”71 She declared birth control “nothing more or less than the facilitation of the process of weeding out the unfit [and] of preventing the birth of defectives.”72 For her work, Sanger was nominated for the Nobel Peace Prize a staggering thirty-one times—twice as many times as Gandhi.73
Making a better world, in this view, meant living for the collective—and, if necessary, compulsion on behalf of the collective. The exposure of the Nazi eugenics program stopped the American progressive eugenicists cold. But the drive for a centralized program of living, as proposed by bureaucrats, has never died.
A WORLD IN ASH
In World War II, all three of these prominent collectivist worldviews came into direct conflict—and somewhere between fifty and eighty million people died. Romantic nationalism engulfed Nazi Germany, along with a worship of centralized bureaucracy and “scientific” governance—and six million Jews were mowed down by German bullets or gassed in death camps. The Soviet Union saw its own population as fodder for the preservation of the state, sending its citizens to die on the front lines of Stalingrad with no guns in their hands but guns at their backs. The United States interned 117,000 Japanese Americans. By the end of the war, the great hope of the telos-free, Godless Enlightenment had faded from view—or, more precisely, it had been buried under the mountains of corpses from World Wars I and II.
The West was suddenly in crisis.
Despite massive technological improvements—and, in part, because of such improvements—the human race had nearly wiped itself off the planet. Science had not solved the search for purpose. In fact, with the discovery of atomic weaponry, it seemed that the West had come to the brink of its own annihilation. The great dream of redefining human beings, discovering transcendent values without reference to God or universal purpose, seemed to have died. While some still held out hope in the West for the eventual triumph of the Soviet experiment, with the revelation of Stalin’s crimes, that hope too faded.
What would replace that hope now?
Chapter 8
After the Fire
The world survived World War II, of course. Not only did the West survive—it got freer, richer, more prosperous than ever. Human wealth expanded exponentially. Life spans increased.
But there remained a hole at the center of Western civilization: a meaning-shaped hole. That hole has grown larger and larger in the decades since—a cancer, eating away at our heart. We tried to fill it with the will to action; we tried to fill it with science; we tried to fill it with world-changing political activism. None of it provides us the meaning we seek.
By the end of World War II, European optimism was dead and buried beneath six feet of human ash. The philosophies of the Europeans—Enlightenment ideals about the value of human beings and the need to move beyond God or Greek teleology—had ended in tragedy. Hitler claimed ideological forebears in Kant, Hegel, and Nietzsche;1 Stalin took his cues from Marx; the eugenicists took their ideas from Darwin and Comte. The post-Locke Enlightenment project had been a Tower of Babel, with the common goal of supplanting religion with reason rather than seeking the congruence of the two. As the tower began to challenge God, its builders went to war with one another, speaking languages all their own.
And then the tower fell, and the land was left barren.
Europe had buried millions of its sons and daughters; the West had placed its bets on mankind, and reaped the whirlwind.
But God did not return. Magna Carta, the first great charter of Western liberties, was signed by King John in 1215, and set limits to monarchic powers based on “regard to God and for the salvation of our soul, and those of all our ancestors and heirs, and unto the honour of God and the advancement of his holy Church.” Religious practice remained the norm in Europe until the middle of the twentieth century. Then, as the children born around World War II reached adolescence, religious observance plummeted.2
Faith in human reason, too, had waned. After the catastrophic insanity of not one but two Great Wars, the Biblical warning not to place faith in princes had been proved prescient. The Enlightenment hope in mankind’s collective capacity to better itself had collapsed.
Without God, and without the collective, all that was left were individuals. Alone.
Thus, the philosophy of existentialism came to the fore.
Existentialism truly began in the nineteenth century with Søren Kierkegaard (1813–1855), a Danish philosopher bothered by the problem of Enlightenment reason, which he saw as arrogant—the notion that a universal ethical system could be discerned by human beings was a fool’s errand, the idea that history was an unerring unfolding of Hegelian dialectics far too simplistic. The universe was cold and chaotic—man’s search for meaning could not begin with an attempt to look outward for that meaning. Kantian universalism was too hopeful, Comtian scientism far too self-assured.
Instead, Kierkegaard posited that human beings had to find meaning by looking within. The system by which one chooses to live is a leap of faith—but in that leap, man finds his individual meaning. “Subjectivity is the truth,” Kierkegaard wrote. “Objectively there is no infinite decision or commitment, and so it is objectively correct to annul the difference between good and evil as well as the law of noncontradiction and the difference between truth and untruth.” Truth can be found in ourselves.3 To Kierkegaard, this meant making the leap of faith to believe in a God beyond man-made ethics—his famous “teleological suspension of the ethical.” Kierkegaard focused on passion as opposed to reason—he deemed passion the most important driving force in life, and concluded, “The conclusions of passion are the only reliable ones.”4 He hoped, of course, that the passionate leap would be toward the Christian God. But his belief system would lead not to God, in the end, but to worship of subjectivity.
If truth lay in the self, then all moral truth automatically became a matter of subjective interpretation. This was the view of Nietzsche, who stated that the greatest man would be he “who can be the most solitary, the most concealed, the most divergent, the man beyond good and evil, the master of his virtues, and of superabundance of will; precisely this shall be called greatness; as diversified as can be entire, as ample as can be full.”5
But all truth was subjective, according to the existentialists, not merely moral truth. This was the view of Karl Jaspers (1883–1969), a German philosopher who wrote, “All knowledge is interpretation.”6 It was also the view of Martin Heidegger (1889–1976), who suggested that the essence of being human was being—not reason or passion, but existence. While Descartes had suggested that proof and meaning in human existence could be predicated on thinking—“I think, therefore I am”—Heidegger contended that our identity was wrapped up purely in existence itself. What did this mean practically? It meant mostly deconstructing ancient notions of eternal truths and human reason going all the way back to Plato and Aristotle. What would fill the gap? Authenticity—the true self, contemplating its own death and the meaninglessness of the universe, “taking hold of itself.”7 Heidegger prophesied a time “when the spiritual strength of the West fails and its joints crack, when this moribund semblance of a culture caves in and drags all forces into confusion and lets them suffocate in madness.” He openly preached the power of will, and saw a choice between “the will to greatness and the acceptance of decline.”8 Heidegger’s extension of this individual idea—taking hold of oneself and our place in the world—may have led to his original association with the Nazis.9
The truest expositor of existentialism, however, was Jean-Paul Sartre (1905–1980). Politically, Sartre was a committed Marxist; he spent his life bouncing between support for Soviet communism and Maoism. But his philosophical contributions lay more precisely in the realm of the individual. According to Sartre, unlike both the ancients and the Enlightenment philosophers, existence precedes essence: in other words, we are born, and then constantly remake ourselves in the face of the world, rather than being subject to the dictates of human nature. There is no sure good or evil; there is merely the world we are granted, and it is our job to make and remake ourselves, utilizing our freedom to do so. So Sartre writes:
Nowhere is it written that the Good exists, that we must be honest, that we must not lie; because the fact is we are on a plane where there are only men. . . . If existence really does precede essence, there is no explaining things away by reference to a fixed and given human nature. In other words, there is no determinism, man is free, man is freedom. On the other hand, if God does not exist, we find no values or commands to turn to which legitimize our conduct. So, in the bright realm of values, we have no excuse behind us, nor justification before us. We are alone, with no excuses. That is the idea I shall try to convey when I say that man is condemned to be free. Condemned, because he did not create himself, yet, in other respects is free; because, once thrown into the world, he is responsible for everything he does.10
This is a beautifully expressed idea—an idea replete with the tragedy of existence, but hopeful about man’s possibility of reaching within himself for something higher. But it also leaves human beings without a guidepost. It promises no communal purpose or communal capacity; it focuses almost entirely on the individual, but leaves individuals without any guide other than the guide within. Furthermore, Sartre’s belief in an unfixed human nature opens the door to utopian schemes of all sorts—if we can merely change the system, as Marx argued, perhaps the New Man will arrive, cloaked in glory.
THE NEW “NATURAL LAW”
While Enlightenment worship of reason may have ended in tears during the first half of the twentieth century, its continued faith in science was amply rewarded. There is no question that the pace of scientific discovery rapidly increased in the period following the Enlightenment: average life expectancy in Europe stood at 36.3 years in 1850; by 1950, it had nearly doubled, to 64.7.11
Science was the future.
The philosophy of scientific government had resulted in the horrors of two World Wars and the specter of centralized, tyrannical government. But the hope that science could free mankind still lived in the postwar period. And why not? As John F. Kennedy put it in one of his last speeches in 1963, “Science is the most powerful means we have for the unification of knowledge, and a main obligation of its future must be to deal with problems which cut across boundaries, whether boundaries between the sciences, boundaries between nations, or boundaries between man’s scientific and his humane concerns.”
The focus on science had radically shifted. Science had begun, in Francis Bacon’s philosophy, as an aid toward the betterment of man’s material conditions; it had morphed over time into an aid toward the betterment of man’s moral condition, though not the source of morality itself. But now, with God out of the picture and the collective implicated in the worst crimes in human history, science was handed the task of creating a new morality, a new law. The existentialists had reduced human purpose to creation of subjective truth; science provided the last remnant of objective truth in Western thought.
Nature, then, was the answer; investigation of nature became the purpose.
The legacy of Western thought had relied on natural law—the idea of universal purposes discernible in the universe through the use of reason. Nature was seen not as a justification for behavior, but as a hint toward a broader pattern in creation: things were designed with a purpose, and it was the job of free human beings to act in accordance with right reason in achieving that purpose. What we ought to do was inherent in what is: a hammer was made for hammering, a pen for writing, and a human for reasoning. Human beings could reason about the good, and then shape the world around them to achieve it.
Now, however, a new form of natural law came to fruition: the belief that whatever occurred in nature was “natural,” and therefore true. This was a far cry from the original “natural law”; it said that human beings were animals, and that their purpose was to act according to their instincts, not their reason.
But the newfound faith that science would take us to the stars and beyond was about to collapse. For while the ancients had counted on human reason to allow us to freely seek and find moral truths, while Judeo-Christian teachings had called on man to use reason to find God and free will to follow Him, science now undermined reason and will.
The first serious advocate of the position that human beings were not rational, free actors was Sigmund Freud (1856–1939). Freud was a charlatan, a phenomenal publicist but a devastatingly terrible practicing psychologist. He was a quack who routinely prescribed measures damaging to patients, then wrote fictional papers bragging about his phenomenal results. In one 1896 lecture, he claimed that by uncovering childhood sexual trauma he had healed some eighteen patients; he later admitted he hadn’t cured anyone. Freud himself stated, “I am actually not at all a man of science, not an observer, not an experimenter, not a thinker. I am by temperament nothing but a conquistador—an adventurer, if you want it translated—with all the curiosity, daring and tenacity characteristic of a man of this sort.”12
But Freud’s radical theories about human nature became world famous. He submitted that religion was but a form of “childhood neurosis” from which the world had to recover. He suggested that the roots of religion lay in an ancient event during which a group of prehistoric brothers had killed their father. Dreams were a form of wish fulfillment, behavior was a manifestation of unconscious desires; in general, people were governed by forces beyond their control. Mirroring Plato, Freud posited a tripartite soul—Plato suggested reason, spirit, and appetite, while Freud suggested superego (moral reason), ego (life experience mediating between appetite and reason), and id (appetite). But where Plato suggested that man should work to ally spirit with reason to overcome appetite, Freud suggested that working to uncover unconscious forces shaping our id would be the best possible solution. In other words, Freud believed that we were all governed by forces we couldn’t understand, absent psychoanalytic intervention.
Freud’s heavy focus here was on sexual neurosis. And while Freud thought that sexual neurosis could be sublimated—energies rechanneled toward more fruitful pursuits—it was only a short step to rejecting that sublimation in favor of freeing us from neurosis through sexual profligacy. Thus Alfred Kinsey (1894–1956) entered the public eye, riding a wave of enthusiasm for promiscuity and excuse-making for it. Kinsey was a zoologist at Indiana University fascinated with the supposed hypocrisy of repressed Americans. Kinsey believed, unlike Freud, that human beings could only be freed by throwing off the shackles of Judeo-Christian morality; he was contemptuous of Freud’s theories and, according to biographer and coworker Wardell Pomeroy, “shocked by the moral judgments Freud constantly made.”13
In 1948, Kinsey came out with his groundbreaking book Sexual Behavior in the Human Male; five years later he returned with Sexual Behavior in the Human Female. These supposedly rigorous studies found that 85 percent of men and 48 percent of women had had premarital sex, and half of men and four in ten women had cheated on their spouses. According to Kinsey, nearly seven in ten men had slept with prostitutes, 10 percent had been homosexual for a prolonged period of time, and 17 percent of males on farms had pursued sex with livestock. Kinsey also claimed that 95 percent of single women had had abortions, along with 25 percent of married women. Americans were, Kinsey argued, a rather bawdy lot.
The first book flew off the shelves, selling two hundred thousand copies in two months alone.
But the science Kinsey pursued was deeply flawed. As journalist Sue Ellin Browder explains, Kinsey’s statistics weren’t reflective of reality, because his sample wasn’t reflective of reality: of his original 5,300 white male sample, at least 317 were sexually abused minors, “several hundred” were male prostitutes, and hundreds were likely sex offenders in prison when they were interviewed. The interviewees were also self-selected, and those who opt into such studies tend to be more sexually profligate. Kinsey used similarly terrible methodology when surveying women. No wonder the chairman of the University of Chicago committee on statistics, W. Allen Wallis, scoffed at Kinsey’s “entire method of collecting and presenting the statistics.”14
But the reality of Kinsey’s methodology mattered less than his implicit promise: human beings could be bettered by casting aside the vestiges of the old morality. And the best news of all was this: it was all natural. No more struggling to seek the natural law; no more utilizing reason to hem in biological urges. By becoming animal, we could become more free. If it felt good, not only should we do it, we had a biological imperative to do it. Forget striving for existential meaning—we could all find truth by being ourselves. This was Rousseau’s argument for the noble savage taken to its biological extreme.
And the basis for that argument would only grow stronger in the scientific community. Scientists would soon argue that the capacity for free choice itself was no longer present—that we were automatons, slaves to our biology, robots deceived by the sophisticated outgrowth of our own neurocircuitry.
Harvard professor E. O. Wilson was perhaps the greatest advocate of this position: he posited that human beings had inescapable programming that made us behave in certain ways in response to our environment. Furthermore, through investigation of the interaction of that innate nature and the environment, we could fully predict human behavior. Culture was merely an outgrowth of that interaction: Darwinian evolution ruled the roost. Wilson called his theory of everything sociobiology. Sociobiology, he said, could provide the great “consilience” of science, merging neuroscience and evolutionary biology and physics—all of science—into a cohesive whole.