The Cave and the Light
At the stroke of their pens, Madison and the Constitutional Convention of 1787 had at last devised a system that overcame the oldest objection of all to free government, that it could work only on a scale the size of Aristotle’s Greek polis, perhaps five thousand people in all. The United States was now designed in such a way that the larger it grew, the freer it became. Democracy was finally able to embrace diversity and conflict instead of shunning it. What Rousseau feared as modern liberty’s greatest weakness, its reliance on self-interest, turned out to be its hidden strength. And Plato’s shame, the relentless pace of temporal change, was revealed to be Aristotle’s glory.
For if the statesmen of the early American Republic anticipated the perpetual clash of interests within the halls of government, they expected just the opposite outside. The ruling American public ethos promised a more coordinated meshing of men’s private desires and their public obligations, in the dynamic balance between the American instinct for individualism and the impulse for voluntary associations. It struck John Stuart Mill’s friend Alexis de Tocqueville when he first visited the United States in 1831. Tocqueville saw how America offered strange proof that Aristotle’s zoon politikon, left to his or her own devices (he noted that the social freedom of American women was one of the glues of American culture), could create a sense of virtuous community purpose equal to anything Plato wanted to cultivate.
“Americans make associations to give entertainments,” Tocqueville wrote in Democracy in America, “to found seminaries, to build inns, to construct churches, to diffuse books,” to create hospitals, schools, and prisons, and to send missionaries to the tropics. Americans had learned that in a democracy, “individuals are powerless, if they do not learn voluntarily to help one another.”10 However, once they join together, “from that moment they are no longer isolated men, but a power seen from afar, whose actions serve for an example, and whose language is listened to.” One such voluntary national movement, the temperance movement, would bring about the Eighteenth Amendment. Another, the abolition movement, would trigger the Civil War and end slavery in the United States.11
Tocqueville saw that some in Europe, like Hegel, argued that the way to help the individual overcome a sense of powerlessness in modern society was to increase the powers of government. Tocqueville believed this was a mistake. Such a move would destroy the motivation for volunteerism and the impulse for drawing together for a common purpose. “If men are to remain civilized,” he concluded, “or to become so, the art of associating together must grow and improve in the same ratio in which the equality of conditions is increased.”12
Certainly nothing displayed the power of this “art of associating” better than American business. “Today it is Americans who carry to their shores nine-tenths of the products of Europe,” Tocqueville wrote in 1832. “American ships fill the docks of Le Havre and Liverpool, while the number of English vessels in New York harbor is comparatively small.” Already by 1800 the United States had more business corporations than all of Europe put together. In short, America’s business was business almost before its founding.13
For Tocqueville and other foreign observers, American business embodied an energy that pulled down barriers social as well as geographic, as commerce and industry spread from New England across the Northeast; wealth was “circulating with inconceivable rapidity, and experience shows that it is rare to find two succeeding generations in the full enjoyment of it.” Business was the new republic’s most valuable renewable resource: indeed, “Americans put something heroic into their way of trading” that Tocqueville found fascinating and that he saw as in deep contrast to the slaveholding society of the South, where slavery “enervates the powers of the mind and benumbs the activity of master and slave alike.”14
Tocqueville was struck by another important balance Americans had achieved, between the push for material progress and enlightenment and their evangelical Protestant roots. Never had Tocqueville seen a country in which religion was less apparent in outward forms. Still, it was all-pervasive, “presenting distinct, simple, and general notions to the mind” and culture, including to American Catholics.15 This supplied a sense of moral solidarity borrowed from Luther and Calvin, which a democratic society built on the Enlightenment pattern might otherwise have to do without. “Belief in a God All Powerful wise and good,” Madison wrote in 1821, “is so essential to the moral order of the World and to the happiness of man that arguments that enforce it cannot be drawn from too many sources”—including those, like Newton’s, that found in nature the existence of nature’s God.16 Thomas Jefferson agreed. The author of the Declaration of Independence is the famed progenitor of the idea of separation of Church and State. He was a confirmed Deist, and while he deeply admired the figure of Jesus, he was also deeply suspicious of organized religion as the enemy of liberty. He saw it as another dangerous offshoot of Plato’s baneful influence on the West.17
The answer was complete religious freedom, including the freedom not to believe. “It does me no injury,” he wrote in Notes on the State of Virginia, “for my neighbor to say that there are twenty gods or no gods. It neither picks my pocket nor breaks my leg.” Yet it was also Jefferson who wrote, “Can the liberties of a nation be thought secure when we have removed their only firm basis, a conviction in the minds of people that these liberties are the gift of God?” Later he added, “No nation has ever yet existed or been governed without religion. Nor can be.”18
So from the start the United States found itself with a constitution built on a permanent clash between the executive, legislative, and judicial branches and between federal power and states’ rights (epitomized by the fierce ideological battles between Jefferson and Alexander Hamilton in the early decades of the Republic), as well as a sectional split between a commercial-minded North and a slave-owning South. It was also a society delicately balanced between individualism and volunteerism, and between a business- and engineer-centered culture of focused practicality and a religious evangelism bordering on mysticism. Clearly, some kind of conceptual glue was going to be needed to hold all these disparate elements together if the new republic was going to survive.
For nearly seventy years, Americans found it in the ideas of Common Sense Realism. It was yet another product of the Scottish Enlightenment, but one with a firmer impress of both Protestant Christianity and Plato’s idealism. In America, its principal spokesman was John Witherspoon, longtime president of Princeton University, signer of the Declaration of Independence, and mentor to an entire generation of American politicians and statesmen, among them James Madison.
The Common Sense philosophy (as it was also called) was a shrewd fusion of an empiricism borrowed from Locke and Aristotle and a moral intuitionism—the idea that the human mind has direct access to truths that the senses cannot reach—that can be traced back to Plato. Thomas Jefferson became a convert to it. Thanks to Witherspoon it became the reigning philosophy in every Protestant seminary of note in America. Its assumptions shaped American education from one-room country schoolhouses to Harvard Yard. It shaped American legal thinking from the moment the United States Supreme Court opened its doors (John Marshall was strongly influenced by it). Indeed, from the Constitutional Convention until the Compromise of 1850, Common Sense Realism was virtually the official creed of the American Republic.19
So what was it? Its founder, Thomas Reid, was part of the empirical tradition flowing from Aristotle and John Locke, which held that all knowledge comes from experience. However, Reid made an important alteration to Locke’s theories. Reid said that the mind is not an entirely blank tabula rasa but comes equipped with a set of “natural and original judgments” that enables human beings to separate out internal sensations arising within their own minds from sensations arising from an outside world.
In other words, we know automatically when we see a pencil in a glass of water that it isn’t really bent even though it appears to be, just as we know someone’s trying to pick our pocket even though he says he’s only helping us on with our coat—and that there’s a difference between good and evil even when certain philosophers say there isn’t.
Reid called this power of judgment “common sense” because it is common to all human beings. Our common sense allows us to distinguish fantasy from reality and truth from falsehood and tell black from white and right from wrong—not by seeing the world as a series of mental images, but by interacting with it through mental acts. This power of judging is what enables us to live more fully in the real world, and the beliefs of common sense “are older and of more authority,” Reid wrote, “than all the arguments of philosophy.” Common sense tells us that the world consists of real objects that exist in real time and space. It also tells us that the more we know about those objects through our experience, the more effectively we can navigate our way through that reality.
More than any previous philosophy, Common Sense Realism had a built-in democratic bias, one reason it was so popular in America. The power of common judgment belongs to everyone, rich or poor, educated or uneducated; indeed, we exercise it every day in hundreds of ways. Of course, ordinary people make mistakes—but so do philosophers. Sometimes ordinary people cannot prove what they believe is true, but many philosophers have the same problem. On some things, however, like the existence of the real world and basic moral truths, people know they need no proof at all. These things are, as Reid put it, self-evident, meaning they are “no sooner understood than they are believed” because they “carry the light of truth itself.”
The common sense man turned out to be the enemy of more than just moral relativism. Madison’s constitution had ensured that countervailing interests would jam the political doorway, allowing no one to get his agenda through without facing the opposition of others. How to sort it out? The answer was “that degree of judgment which is common,” as Reid put it, “to men with whom we can converse and transact business.” Common sense would enable people to agree on certain fundamental priorities and truths, so that a solution could be worked out, whether the issue was a Supreme Court nomination, a tariff, or whether America should go to war. In a democratic America where no one was officially in charge, not even philosophers, common sense would have to rule.
But what if it didn’t rule? In 1860, it collapsed on the issue of slavery. Reasonable Americans, men who conducted business in Washington and elsewhere on those common sense principles every day, saw the same disaster looming: the secession of the southern states. All agreed it would be a disaster; many appealed to the many compromises struck since 1820 to make the issue go away. Yet no one could do anything to stop it. Some on both sides even welcomed it.
It took Abraham Lincoln to realize that abolitionists like William Lloyd Garrison had seen a higher truth that a common sense man like Stephen Douglas failed to recognize: Slavery wasn’t just a national embarrassment or source of sectional friction. It was a deep and pervasive national sin. Lincoln was a prairie product of the American Enlightenment, a reader of Locke and Mill as well as Jefferson and the Founding Fathers. But Lincoln also believed in an Old Testament God who would make the nation pay a terrible price for selling human beings as chattel. Says Saint Paul’s letter to the Hebrews (9:22), “Without shedding of blood there is no remission.” Lincoln’s God told him that the sin could be blotted out not by rational argument and compromise, but only by bloodshed.
As president, Lincoln may have started the Civil War believing that saving the Union and ending slavery were two distinct aims. By the time he issued the Emancipation Proclamation in 1862, he realized they were one and the same. Only then, as he stated in his Gettysburg Address, would America be ready for a new “birth of freedom” and to give a new meaning to the idea of democracy.20 It took the slaughter of Gettysburg, Atlanta, the Wilderness, and Spotsylvania to convince the rest of the nation that he had been right.
It also meant that the old way of framing intellectual and moral debates in America would have to change. The Civil War of 1861–65 shattered the certainties of Common Sense Realism almost as decisively as the Great War would shatter those of Victorian Europe. Of course, as in the European case, the doubts and counterthrusts had begun years before that. Hegel, Kant, and German Idealism had broken through the Common Sense crust as early as the 1840s. Their influence branched out with the American Transcendentalists like Ralph Waldo Emerson and grew into full blossom in the Harvard of Josiah Royce and George Santayana, even as Princeton served as the last bastion of Common Sense Realism under its Scottish-born president, James McCosh. Ernst Mach, Auguste Comte, even Karl Marx: all found American converts in the new post–Civil War industrial age.
It was clear that some reassessment of old principles was desperately in order, and the place it happened was the thriving port city of Baltimore, at the brand-new university founded by a Quaker merchant named Johns Hopkins.
The life of its founder reflected many of the cross-currents of American culture, as well as the vibrancy of its business culture. Born in 1795 on a tobacco plantation, Hopkins was the son of Quakers who, in 1807, freed their slaves in accordance with Quaker doctrine and put their own eleven children, including twelve-year-old Johns, to work in the fields in their place.
Five years later Johns Hopkins left to join his uncle’s wholesale grocery business in Baltimore—just in time to witness the British siege of Fort McHenry during the War of 1812. After the war, Hopkins struck out on his own, founding a dry goods business with his three brothers. Hopkins and Brothers became dealers in the region, and Johns Hopkins became a very rich man as well as a director of the fledgling Baltimore and Ohio Railroad.
The Civil War found him—unlike most Marylanders—a firm supporter of Abraham Lincoln and the abolitionist cause: he even gave the Union Army use of the B&O for free. After the war he poured his fortune into various philanthropic projects, including a college in the District of Columbia for African-American women, an orphanage in Baltimore for black children, and a university in the same town that opened its doors three years after his death, in 1876.
Under its first president, Daniel Coit Gilman, the Johns Hopkins University was the first American academy founded to compete with its European counterparts in the breadth of its learning and depth of its cutting-edge research. Gilman recruited scientific giants such as mathematician James Sylvester and physicists Henry Rowland and Lord Kelvin, inventor of the famous temperature scale but also a major researcher in electromagnetism and atomic theory—the same frontiers James Clerk Maxwell and Ludwig Boltzmann were exploring on the other side of the Atlantic. Gilman hired famed philosophers George S. Morris and Stanley Hall; and the first Hopkins PhDs in philosophy would go to such future luminaries as Josiah Royce, Thorstein Veblen, and a rumpled, nearsighted youngster from Vermont named John Dewey.
Gilman always said the goal of a university should be “to make less misery for the poor, less ignorance in the schools, less suffering in the hospitals”—in 1893 he would create the Johns Hopkins Medical School, run by the legendary Canadian-born physician Sir William Osler—“less fraud in business, and less folly in politics.”21 But his most significant contribution was hiring a shy man with a degree in chemistry from Harvard and a background in physics and astronomy who happened to be working for the U.S. Coast and Geodetic Survey, to teach the Hopkins students logic.
He was Charles Sanders Peirce, and together with another visitor to Hopkins who occasionally dropped in from Harvard to lecture there, William James (Gilman tried desperately to hire James full time, but Harvard refused to let him go), he would create America’s first homegrown philosophical creed—one that, more than any other, worked to translate the culture’s dynamic balance of Plato and Aristotle into a conscious way of understanding the world.
Although Peirce devoted himself to teaching logic, few people in America had better knowledge of the new trends in Western scientific thinking, from Darwin to Maxwell’s thermodynamics and Mach’s Positivism—as well as the mathematics of probability. That knowledge, however, made him uneasy. It was the same unease that had stirred Henry More in the mid-1600s about the triumph of the new mechanical science, on the eve of Newton’s arrival at Cambridge.
In an impersonal world that has finally, definitively banished all final causes from nature and our lives—including, presumably, God—what happens to the human factor? “The world … is evidently not governed by blind law,” Peirce would write, “its leading characteristics are absolutely irreconcilable with that view”—including how we lead our lives in accordance with the basic assumption of free will.22 Yet the triumph of Darwin and science, and the breakup of Common Sense Realism, seemed to encourage people to think the opposite, making them feel like minor cogs in the great impersonal machine of Nature.
A man said to the universe:
“Sir, I exist!”
“However,” replied the universe,
“The fact has not created in me
A sense of obligation.”
Stephen Crane
No wonder others were being drawn to the nihilistic pessimism already surfacing in the Europe of Friedrich Nietzsche (Birth of Tragedy appeared the year after Peirce published his first article in North American Review); in the deterministic materialism of Karl Marx (the first American edition of The Communist Manifesto appeared in 1871); and in the strange supernatural flights of the mystagogue Baron Swedenborg (one of whose devotees was William James’s father). Peirce would have argued that Hegel belonged in the same camp.
That disillusionment had already appeared in the works of Mark Twain. The author of Tom Sawyer and The Adventures of Huckleberry Finn felt a chronic anxiety about the direction in which the country was headed after the Civil War, a gloom that broke the surface in his late essay What Is Man?: “There is no God, no universe, no human race, no earthly life, no heaven, no hell. It is all a dream—a grotesque and foolish dream. Nothing exists but you. And you are but … a useless thought, a homeless thought, wandering forlorn among the empty eternities.”23 Discovery of the law of entropy had led the historian Henry Adams, grandson of one president, John Quincy Adams, and great-grandson of another, John Adams, to conclude that the human race was stuck on a degenerative course that would leave not only civilization but the planet itself a cold, lifeless lump of matter by 2025.24