
But What If We're Wrong?


by Chuck Klosterman


  But then the Internet started to collect and index everything, including opinions and reviews and other subjective non-facts. This happened Hemingway-style: gradually (I wrote most of my first book in 1999 and the Internet was no help at all) and then suddenly (that book somehow had its own Wikipedia page by 2005). During the last half of the nineties, the Internet still felt highly segregated—to a mainstream consumer, it was hard to see the ideological relationship between limitless porn and fantasy football and Napster and the eradication of travel agents. What unified that diaspora was the rise of blogging, spawning what’s now recognized as the “voice” of the Internet. Yet that voice is only half the equation; the other half is the mentality that came along with it. The first successful groundswell of bloggers came from multiple social classes and multiple subcultures. As a collective, they were impossible to define. But they did have one undeniable thing in common: They were, almost by definition, early adopters of technology. They were into the Internet before most people cared what it was. And in most cases, this interest in early adoption was not restricted to computers. These were the kind of people who liked grunge music in 1989. These were the kind of people who subscribed to Ray Gun magazine and made a point of mentioning how they started watching Seinfeld when it was called The Seinfeld Chronicles. These were the kind of people who wore a Premier League jersey to the theatrical premiere of Donnie Darko. These are consumers who self-identify as being the first person to know about something (often for the sake of coolness, but just as often because that’s the shit they’re legitimately into). It’s integral to their sensibility. And the rippling ramifications of that sensibility are huge.

  For a time in the early 2000s, there was a belief that bloggers would become the next wave of authors, and many big-money blogger-to-author book deals were signed. Besides a handful of notable exceptions, this rarely worked, commercially or critically. The problem was not a lack of talent; the problem was that writing a blog and writing a book have almost no psychological relationship. They both involve a lot of typing, but that’s about as far as it goes. A sentence in a book is written a year before it’s published, with the express intent that it will still make sense twenty years later. A sentence on the Internet is designed to last one day, usually the same day it’s written. The next morning, it’s overwritten again (by multiple writers). The Internet experience is not even that similar to daily newspaper writing, because there’s no physical artifact to demarcate the significance of the original moment.67 Yet this limitation is not a failure. It proved to be an advantage. It naturally aligns with the early-adoption sensibility that informs everything else. Even when the Internet appears to be nostalgically churning through the cultural past, it’s still hunting for “old newness.” A familiar video clip from 1986 does not possess virality; what the medium desires is an obscure clip from 1985 that recontextualizes the familiar one. The result is a perpetual sense of now. It’s a continual merging of the past with the present, all jammed into the same fixed perspective. This makes it seem like our current, temporary views have always existed, and that what we believe today is what people have always believed. There is no longer any distance between what we used to think and what we currently think, because our evolving vision of reality does not extend beyond yesterday.

  And this, somewhat nonsensically, is how we might be right: All we need to do is convince ourselves we always were. And now there’s a machine that makes that easy.

  [4]

  “I am often wrong,” wrote satirist and critic H. L. Mencken, a statement that would seem more disarming were it not for the fact that Mencken so often opened his quotations by suggesting his forthcoming thoughts were worthless. “My prejudices are innumerable, and often idiotic. My aim is not to determine facts, but to function freely and pleasantly.”

  I get this. I understand what he’s getting at, and sometimes I relate to it: Since our interior thoughts are (ultimately) arbitrary and meaningless, we might as well think whatever we prefer thinking. This was especially important to a guy like Mencken, who was against US participation in World War II and hated Franklin Roosevelt. He was quite willing to concede that his most intensely held opinions weren’t based on factual data, so trying to determine what the factual data actually was would only make him depressed. It’s a worldview that—even if expressed as sarcasm—would be extremely unpopular today. But it’s quietly become the most natural way to think about everything, due to one sweeping technological evolution: We now have immediate access to all possible facts. Which is almost the same as having none at all.

  Back in the landlocked eighties, Dave Barry offhandedly wrote something pretty insightful about the nature of revisionism. He noted how—as a fifth-grader—he was told that the cause of the Civil War was slavery. Upon entering high school, he was told that the cause was not slavery, but economic factors. At college, he learned that it was not economic factors but acculturalized regionalism. But if Barry had gone to graduate school, the answer to what caused the Civil War would (once again) be slavery.68 Now, the Civil War is the most critical event in American history, and race is the defining conflict of this country. It still feels very much alive, so it’s not surprising that teachers and historians want to think about it on disparate micro and macro levels, even if the realest answer is the simplest answer. But the Internet allows us to do this with everything, regardless of a subject’s significance. It can happen so rapidly that there’s no sense the argument has even evolved, which generates an illusion of consistency.

  I’ve been writing this book during a period when many retired eighties-era pro wrestlers have died—the Ultimate Warrior, Dusty Rhodes, Rowdy Roddy Piper, etc. The outpouring of media recognition regarding these deaths has been significant. The obituaries frame these men as legends, and perhaps that’s how they deserve to be framed. But what’s been weird about this coverage is the unspoken viewpoint. Logically, it seems like a remembrance of Dusty Rhodes should include some version of the following: “We didn’t think this guy was important, but he was. Culturally, we were wrong about pro wrestling.” Because during the 1980s, almost no one thought pro wrestling mattered at all. Even the teenage males who loved it rarely took it seriously. But this is not how these remembrances were delivered. Instead, the unspoken viewpoint was of course these people were important, and of course we all accept and understand this, and of course there is nothing remotely strange about remembering Dusty Rhodes as a formative critic of Reagan-era capitalism. Somebody once believed this, which means it was possible for anyone to have believed this, which means everyone can retroactively adopt this view as what they’ve always understood to be true. No one was ever wrong about wrestling. We were always right about it. In 1976, Renata Adler wrote the experimental novel Speedboat. It went out of print. When it was re-released in 2013, Speedboat was consumed and adopted as “old newness” (“Millennials, Meet Renata Adler,” demanded a headline in The New Republic). In a span of two years, Adler completely reentered the critical dialogue, almost as if she had been there the whole time. The thirty-plus years this book was ignored no longer exist. Technologically, 1976 and 2013 exist in the same moment.

  There’s a common philosophical debate about the nature of time. One side of the debate argues that time is happening in a linear fashion. This is easy to understand. The other side argues that all time is happening at once. This is difficult to comprehend. But replace the word “time” with “history,” and that phenomenon can be visualized on the Internet. If we think about the trajectory of anything—art, science, sports, politics—not as a river but as an endless, shallow ocean, there is no place for collective wrongness. All feasible ideas and every possible narrative exist together, and each new societal generation can scoop out a bucket of whatever antecedent is necessary to support their contemporary conclusions. When explained in one sentence, that prospect seems a little terrible. But maybe that’s just because my view of reality is limited to river-based thinking.

  I’ve slowly become an admirer of Edward Snowden, the former government employee who leaked thousands of classified documents and now lives in exile. I was initially skeptical of Snowden, until I saw the documentary Citizenfour. Granted, Citizenfour is a non-objective telling of his story, produced by the journalists Snowden was aligned with. It could be classified as a propaganda film. But it’s impossible to watch Snowden speak without trusting the sincerity of his motives and the tenacity of his central argument. I believe Snowden more than I believe the government. He does, however, make one statement in Citizenfour that seems preposterous and wrong: While discussing the alleged greatness of the early (pre-surveillance) Internet, he notes that a child in one part of the world could have an anonymous discussion with a verified expert in another part of the world and “be granted the same respect for their ideas.” To me, that does not sound like a benefit. That sounds like a waste of time and energy, at least for the verified expert. The concept of some eleven-year-old in Poland facelessly debating Edward Witten on an equal platform, just because there’s a machine that makes this possible, seems about as reasonable as letting dogs vote. But I suppose that’s because I still can’t accept the possibility of Witten being totally wrong, no matter how hard I try. I mean, if we found records of an eleven-year-old girl from 340 BC who contacted Aristotle and told him his idea about a rock wanting to sit on the ground was irrational bullshit, we’d name a college after her.

  Only the Penitent Man Shall Pass

  A large group of people are eating and drinking. They’re together, but not really together. Some of the people know each other well and others are almost strangers; instead of one mass conversation, there are little pockets of conversations, sprinkled throughout the table. I am at this table. What I am talking about is unimportant, or—more accurately—will need to be classified as “unimportant,” as I will not be able to remember what it was when I awake in the morning. But it must be some topic where I’m expressing doubt over something assumed to be self-evident, or a subject where the least plausible scenario is the most interesting scenario, or a hypothetical crisis that’s dependent on the actualization of something insane. I say this because someone at the table (whom I’ve met only once before) eventually joins my semi-private conversation and says, “It must be terrifying to think the world is actually like that.”

  “What do you mean?” I ask. My memory of what she says next is sketchy, but it’s something along the lines of: It must be terrifying to view the world from the perspective that most people are wrong, and to think that every standard belief is a form of dogma, and to assume that reality is not real. Her analysis is delivered in a completely non-adversarial tone; it is polite, almost like she is authentically concerned for my overall well-being. My response is something like “Well, I don’t really think like that,” because I don’t think I think the way she thinks I think. But maybe I do. And I get what she’s driving at, and I realize that—from her vantage point—any sense of wide-scale skepticism about the seemingly obvious would be a terrifying way to live.

  There’s an accepted line of reasoning that keeps the average person from losing his or her mind. It’s an automatic mental reflex. The first part of the reasoning involves a soft acceptance of the impossible: our recognition that the specific future is unknowable and that certain questions about the universe will never be answered, perhaps because those answers do not exist. The second part involves a hard acceptance of limited truths: a concession that we can reliably agree on most statements that are technically unprovable, regardless of whether these statements are objective (“The US government did not plan the 9/11 attacks”), subjective (“Fyodor Dostoyevsky is a better novelist than Jacqueline Susann”), or idealistic (“Murder is worse than stealing, which is worse than lying, which is worse than sloth”). It’s a little like the way we’re biologically programmed to trust our friends and family more than we trust strangers, even if our own past experience suggests we should do otherwise. We can’t unconditionally trust the motives of people we don’t know, so we project a heightened sense of security upon those we do, even if common sense suggests we should do the opposite. If 90 percent of life is inscrutable, we need to embrace the 10 percent that seems forthright, lest we feel like life is a cruel, unmanageable joke. This is the root of naïve realism. It’s not so much an intellectual failing as an emotional sanctuary from existential despair.

  It is not, however, necessary.

  Is there a danger (or maybe a stupidity) in refusing to accept that certain espoused truths are, in fact, straightforwardly true? Yes—if you take such thinking to the absolute extreme. It would be pretty idiotic if I never left my apartment building, based on the remote mathematical possibility that a Komodo dragon might be sitting in the lobby. If my new postman tells me his name is Toby, I don’t ask for state-issued identification. But I think there’s a greater detriment with our escalating progression toward the opposite extremity—the increasingly common ideology that assures people they’re right about what they believe. And note that I used the word “detriment.” I did not use the word “danger,” because I don’t think the notion of people living under the misguided premise that they’re right is often dangerous. Most day-to-day issues are minor, the passage of time will dictate who was right and who was wrong, and the future will sort out the past. It is, however, socially detrimental. It hijacks conversation and aborts ideas. It engenders a delusion of simplicity that benefits people with inflexible minds. It makes the experience of living in a society slightly worse than it should be.

  If you write a book about the possibility of collective wrongness in the present day, there are certain questions people ask you the moment you explain what you’re doing. Chief among these is, “Are you going to write about climate change?” Now, I elected not to do this, for multiple reasons. The main reason is that the Earth’s climate is changing, in a documented sense, and that there is exponentially more carbon in the atmosphere than at any time in man’s history, and that the rise of CO2 closely corresponds with the rise of global industrialization. Temperature readings and air measurements are not speculative issues. But the more insidious reason I chose not to do this is that I knew doing so would automatically nullify the possibility of writing about any non-polemic ideas even vaguely related to this topic. It would just become a partisan, allegorical battle over what it means to accept (or deny) the central concept of global warming. This is one of those issues where—at least in any public forum—there are only two sides: This is happening and it’s going to destroy us (and isn’t it crazy that some people still disagree with that), or this is not happening and there is nothing to worry about (and isn’t it crazy how people will just believe whatever they’re told). There is no intellectual room for the third rail, even if that rail is probably closer to what most people quietly assume: that this is happening, but we’re slightly overestimating—or dramatically underestimating—the real consequence. In other words, the climate of the Earth is changing, so life on Earth will change with it. Population centers will shift toward the poles. Instead of getting wheat from Kansas, it will come from Manitoba. The oceans will incrementally rise and engulf the southern tip of Manhattan, so people will incrementally migrate to Syracuse and Albany. The average yearly temperature of London (45 degrees Fahrenheit) might eventually approach the average yearly temperature of Cairo (70.5 degrees), but British society will find a way to subsist within those barren conditions. Or perhaps even the pessimists are too optimistic; perhaps it’s already too late, the damage is irrevocable, and humankind’s time is finite. The international community has spent the last two decades collectively fixated on reducing carbon emissions, but the percentage of carbon in the atmosphere still continues to increase. Maybe we’ve already entered the so-called Sixth Extinction and there is no way back. Maybe the only way to stop this from happening would be the immediate, wholesale elimination of all machines that produce carbon, which would equate to the immediate obliteration of all industry, which would generate the same level of chaos we’re desperately trying to avoid. Maybe this is how humankind is supposed to end, and maybe the downside to our species’ unparalleled cerebral evolution is an ingrained propensity for self-destruction. If a problem is irreversible, is there still an ethical obligation to try to reverse it?

  Such a nihilistic question is hard and hopeless, but not without meaning. It needs to be asked. Yet in the modern culture of certitude, such ambivalence has no place in a public conversation. The third rail is the enemy of both poles. Accepting the existence of climate change while questioning its consequence is seen as both an unsophisticated denial of the scientific community and a blind acceptance of the non-scientific status quo. Nobody on either side wants to hear this, because this is something people really, really need to feel right about, often for reasons that have nothing to do with the weather.69

  [2]

  There’s a phrase I constantly notice on the Internet, particularly after my wife pointed out how incessant it has become. The phrase is, “You’re doing it wrong.” It started as a meme for photo captions but evolved into something different; it evolved into a journalistic device that immediately became a cliché. A headline about eyewear states, “Hey Contact Wearer, You’re Doing It Wrong!” A story about how many people are watching streaming TV shows gets titled “Netflix Ratings: You’re Doing It Wrong.” Newsweek runs a story with the headline “You’re 100 Percent Wrong About Showering.” Time opens a banking story about disgust over ATM fees by stating, “You’re doing it wrong: most Americans aren’t paying them at all.” These random examples all come from the same month, and none are individually egregious. It could be argued that this is simply an expository shortcut, and maybe you think I should appreciate this phrase, since it appears to recognize the possibility that some widely accepted assumption is being dutifully reconsidered. But that’s not really what’s happening here. Whenever you see something defining itself with the “You’re doing it wrong” conceit, it’s inevitably arguing for a different approach that is just as specific and limited. When you see the phrase “You’re doing it wrong,” the unwritten sentence that follows is: “And I’m doing it right.” Which has become the pervasive way to argue about just about everything, particularly in a Web culture where discourse is dominated by the reaction to (and the rejection of) other people’s ideas, as opposed to generating one’s own.

 
