These Truths


by Jill Lepore


  He posted this statement on the web, where it became one of the very first posts to spread, as was said, like a virus, an infection.

  Cyberutopians who had no use for government ignored the altogether inconvenient fact that of course not only the Internet itself but also nearly all the tools used to navigate it, along with the elemental inventions of the digital age, had been built or subsidized by taxpayer-funded, government-sponsored research. The iPhone, to take only one example, depended on the extraordinary ingenuity of Apple, but it also depended on U.S. government–funded research that had earlier resulted in several key technological developments, including GPS, multi-touch screens, LCD displays, lithium-ion batteries, and cellular networks. Nevertheless, Barlow and his followers believed that the Internet existed entirely outside of government, as if it had sprung up, mirabile dictu, out of nowhere, before and outside of civil society and the rule of law, in the borderless psychedelic fantasy world of cyberspace. “I ask you of the past to leave us alone,” Barlow pleaded. But if the futurists were uninterested in the past, they seemed also strangely incautious about the future. With rare exception, early Internet boosters, who fought for deregulation and against antitrust enforcement even as they benefited from the munificence of the federal government, evidenced little concern about the possible consequences of those measures on income inequality and political division in the United States and around the world.49

  The Internet, a bottomless sea of information and ideas, had profound effects on the diffusion of knowledge, and especially on its speed and reach, both of which were accelerated by smartphones. If not so significant to human history as the taming of fire, it was at least as significant as the invention of the printing press. It accelerated scholarship, science, medicine, and education; it aided commerce and business. But in its first two decades, its unintended economic and political consequences were often dire. Stability, in American politics, had depended not on the wealth of the few but on the comfort of the many, not on affluence but on security, and a commitment to the notion of a commonwealth. The Internet did not destroy the American middle class, but it did play a role in its decline. It fueled economic growth and generated vast fortunes for a tiny clutch of people at a time when the poor were becoming poorer and the middle class disappearing. It turned out that the antimonopoly regulations of the industrial era, far from being obsolete, were sorely needed in the information age. And the vaunted promise of Internet connection, the gauzy fantasy of libertarians and anarchists who imagined a world without government, produced nothing so much as a world disconnected and distraught.

  Silicon Valley, as it grew, earned a reputation as a liberal enclave, but it also drew a younger generation of libertarians, who had come not from the counterculture but from the New Right. Peter Thiel, born in Germany in 1967, had gone to Stanford, where in 1987 he founded the Stanford Review with funding from Irving Kristol, and then to Stanford Law School. The Review aimed to counter campus multiculturalism, feminism, and political correctness, whose rise at Stanford Thiel had lamented in The Diversity Myth, a 1990s update and dilution of God and Man at Yale. George Gilder and Robert Bork were among Thiel’s heroes. (Bork’s writings on the error of antitrust laws informed much Silicon Valley libertarianism.) After a brief career as a lawyer and a stock trader, Thiel had returned to California in 1996, just in time for the dot-com boom, which followed the lifting of restrictions on commercial traffic on the Internet. Ten thousand websites were launched every day, poppies in a field. In 1996, Bob Dole, an unlikely pioneer, became the first presidential candidate to have a website. Amazon was founded in 1994, Yahoo! in 1995, Google in 1998. In 1998, Thiel cofounded PayPal, hoping that it would free the citizens of the world from government-managed currency. “PayPal will give citizens worldwide more direct control over their currencies than they ever had before,” he promised.50

  The Silicon Valley entrepreneur—almost always a man—became the unrivaled hero of the Second Gilded Age. He was a rescued man, the male breadwinner defended by George Gilder in the 1970s against the forces of feminism, saved, and newly seen as saving the nation itself. Multibillion-dollar Internet deals were made every day. In four years, the value of dot-coms, many of which had not earned a profit, rose by as much as 3,000 percent. By 1999, Bill Gates, at forty-three, had become the richest man in the world, and Microsoft the first corporation in history valued at more than half a trillion dollars.51

  Inventors from Benjamin Franklin to Thomas Edison had been called “men of progress.” Silicon Valley had “disruptive innovators.” The language was laden, freighted with the weight of centuries. Historically, the idea of innovation has been opposed to the idea of progress. From the Reformation through the Enlightenment, progress, even in its secular usage, connoted moral improvement, a journey from sin to salvation, from error to truth. Innovation, on the other hand, meant imprudent and rash change. Eighteenth-century conservatives had called Jacobinism “an innovation in politics,” Edmund Burke had derided the French Revolution as a “revolt of innovation,” and Federalists, opposing Jefferson, had declared themselves to be “enemies to innovation.”52 Over the nineteenth century, the meaning of progress narrowed, coming, more often, to mean merely technological improvement. In the twentieth century, innovation began to replace progress, when used in this sense, but it also meant something different, and more strictly commercial. In 1939, the economist Joseph Schumpeter, in a landmark study of business cycles, used “innovation” to mean bringing new products to market, a usage that spread only slowly, and only in the specialized scholarly literatures of economics and business. In 1942, Schumpeter theorized about “creative destruction,” language that, after Hiroshima, had virtually no appeal.53 Progress, too, attracted critics; in the age of the atom bomb, the idea of progress seemed, to many people, obscene: salvation had not, in fact, been found in machines; to the contrary. Innovation gradually emerged as an all-purpose replacement, progress without goodness. Innovation might make the world a better place, or it might not; the point was, innovation was not concerned with goodness; it was concerned with novelty, speed, and profit.

  “Disruption” entered the argot in the 1990s. To disrupt something is to take it apart. The chief proselytizer of “disruptive innovation” (a rebranding of “creative destruction”) was Clayton M. Christensen, a professor at Harvard Business School. In 1997, Christensen published The Innovator’s Dilemma, a business bible for entrepreneurs, in which he argued that companies that make only “sustaining innovations” (careful, small, gradual refinements) are often overrun by companies that make “disruptive innovations”: big changes that allow them to produce a cheaper, poorer-quality product for a much larger market. IBM made sustaining innovations in its mainframe computers, a big, expensive product marketed to big businesses; Apple, selling a personal computer that ordinary people could afford, made a disruptive innovation.54

  After 9/11, disruptive innovation, a theory that rested on weak empirical evidence, became gospel, a system of belief, a way of reckoning with uncertainty in an age of rapid change, an age of terror. Terrorism was itself a kind of disruptive innovation, cheaper and faster than conventional war. The gospel of disruptive innovation applauded recklessness and heedlessness. Mark Zuckerberg founded Facebook in 2004, when he was not yet twenty, partly with funding from Thiel. “Unless you are breaking stuff, you aren’t moving fast enough,” he said, embracing the heedlessness of disruptive innovation. “Don’t be evil” was Google’s motto, though how to steer clear of iniquity appears to have been left to market forces. Companies and whole industries that failed were meant to fail; disruptive innovation aligned itself with social Darwinism. Above all, the government was to play no role in restraining corporate behavior: that had been a solution for the industrial age, and this was an age of knowledge.55

  One of the first casualties of disruptive innovation, from the vantage of American democracy, was the paper newspaper, which had supplied the electorate with information about politics and the world and a sense of political community since before the American Revolution. “Printers are educated in the Belief, that when Men differ in Opinion,” Benjamin Franklin had once written, “both Sides ought equally to have the Advantage of being heard by the Publick; and that when Truth and Error have fair Play, the former is always an overmatch for the latter.”56 There had been great newspapers and there had been lousy newspapers. But the Republic had never known a time without newspapers, and it was by no means clear that the Republic could survive without them, or at least without the freedom of the press on which they were established, the floor on which civil society stands. Nevertheless, neither that history nor that freedom—nor any manner of editorial judgment whatsoever—informed decisions made by the disruptive innovators who declared the newspaper dead.

  The deregulation of the communications industry had allowed for massive mergers: General Electric bought RCA and NBC; Time merged with Warner, and then with AOL. Newspapers housed within these giant corporations became less accountable to their readers than to stockholders. (The New York Times, the Washington Post, and National Public Radio were among a handful of exceptions.) Fast-growing dot-coms had been a chief source of newspaper advertising revenue; during the dot-com bust, those companies either slashed their advertising budgets or eliminated them; they also turned to advertising online instead. Readers found that they could get their news without paying for it, from news aggregators that took reported stories from the newspapers and reprinted them. Papers began laying off a generation of experienced editors and reporters, then whole bureaus, and then the papers began closing their doors.57

  “The Internet is the most democratizing innovation we’ve ever seen,” Democratic presidential candidate Howard Dean’s campaign manager said in 2004, “more so even than the printing press.” At the time, many journalists agreed. Tom Brokaw talked about the “democratization of news,” and conservative journalists, in particular, celebrated the shattering of the “power of elites” to determine what is news and what is not.58

  Compared to newspapers and broadcast television news, the information available on the Internet was breathtakingly vast and thrilling; it was also uneven, unreliable, and, except in certain cases, unrestrained by standards of reporting, editing, and fact-checking. The Internet didn’t leave seekers of news “free.” It left them brutally constrained. It accelerated the transmission of information, but the selection of that information—the engine that searched for it—was controlled by the biggest unregulated monopoly in the history of American business. Google went public in 2004. By 2016, it controlled nearly 90 percent of the search market.59

  The Internet transformed the public sphere, blurring the line between what political scientists had for decades called the “political elite” and the “mass public,” but it did not democratize politics. Instead, the Internet hastened political changes that were already under way. A model of citizenship that involved debate and deliberation had long since yielded to a model of citizenship that involved consumption and persuasion. With the Internet, that model yielded to a model of citizenship driven by the hyperindividualism of blogging, posting, and tweeting, artifacts of a new culture of narcissism, and by the hyperaggregation of the analysis of data, tools of a new authoritarianism. Data collected online allowed websites and search engines and eventually social media companies to profile “users” and—acting as companies selling products rather than as news organizations concerned with the public interest—to feed them only the news and views with which they agreed, and then to radicalize them. Public opinion polling by telephone was replaced by the collection and analysis of data. Social media, beginning with Facebook, moving fast and breaking things, exacerbated the political isolation of ordinary Americans while strengthening polarization on both the left and the right, automating identity politics, and contributing, at the same time, to a distant, vague, and impotent model of political engagement.60 In a wireless world, the mystic chords of memory, the ties to timeless truths that held the nation together, faded to ethereal invisibility.

  “OUR WAR ON TERROR begins with al Qaeda, but it does not end there,” Bush said when he addressed Congress and a shaken nation on September 20, 2001. “It will not end until every terrorist group of global reach has been found, stopped, and defeated.” Bush pledged to destroy not only the perpetrators of the attacks on 9/11 but terrorism itself. This was not merely the saber rattling of a moment. By 2006, the stated objective of the National Security Strategy of the United States was to “end tyranny.” Like a war on poverty, a war on crime, and a war on drugs, a war on terror could imagine no end.61

  Terrorism respected no borders and recognized no laws. Fighting it risked doing the same. In 1980, twenty-three-year-old Osama bin Laden had joined a resistance movement against the Soviet occupation of Afghanistan, supplying funds and building a network of supporters. In 1988, when the mujahideen triumphed and the Soviet Union agreed to withdraw from Afghanistan, bin Laden formed al Qaeda as a base for future jihads, or holy wars. Bin Laden was not a cleric and did not in any way speak for the religion of Islam. But he did describe his movement in religious terms, as a form of political incitement. At a time of economic decline, political unrest, and violent sectarianism throughout the Arab world, he called for a jihad against Americans, whom he described as a godless, materialist people. Bin Laden argued that Americans had defiled the Islamic world and undermined the Muslim faith by causing wars between Muslims in Europe, Asia, Africa, and the Middle East. “It is saddening to tell you that you are the worst civilization witnessed by the history of mankind,” he wrote in a letter to America. In 1990, after Saddam Hussein invaded Kuwait, he urged the Saudi monarchy to support a jihad to retake it; instead, the Saudis welcomed U.S. forces into Saudi Arabia. Bin Laden denounced the American “occupation” and recruited and trained forces for terrorist acts that included suicide bombings. The CIA formed a special unit to work against al Qaeda and bin Laden in 1996, by which time bin Laden had declared war on the United States and found refuge with the Taliban, radical Islamic fundamentalists who had taken over Afghanistan and remade it as a religious state. In 1998, bin Laden issued a fatwa against all Americans, describing the murder of Americans as the “individual duty for every Muslim who can do it in any country,” in the name of a “World Islamic Front.”62

  After 9/11, the Bush administration demanded that the Taliban hand over bin Laden. The Taliban refused. On October 7, 2001, the United States began a war in Afghanistan. The immediate aim of the war, waged with coalition partners, was to defeat al Qaeda; its more distant aim was to replace the Taliban with a democratically elected, pro-Western government.63 It became the longest war in American history.

  The Bush administration conceived of the war on terror as an opportunity to strike against hostile regimes all over the world, on the grounds that they harbored and funded terrorists. Between 1998 and 2011, military spending nearly doubled, reaching more than $700 billion a year—more, in adjusted dollars, than at any time since the Allies were fighting the Axis. In his 2002 State of the Union address, Bush described Iraq, Iran, and North Korea as another axis. “States like these, and their terrorist allies, constitute an axis of evil, arming to threaten the peace of the world,” he said. “By seeking weapons of mass destruction, these regimes pose a grave and growing danger. They could provide these arms to terrorists, giving them the means to match their hatred.” For all his fierce rhetoric, Bush took great pains not to denounce Islam itself, steering clear of inciting still more hatred. “All Americans must recognize that the face of terror is not the true face of Islam,” he said later that year. “Islam is a faith that brings comfort to a billion people around the world. It’s a faith that has made brothers and sisters of every race. It’s a faith based upon love, not hate.”64

  The Bush administration soon opened a second front in the war on terror. In 2003, another U.S.-led coalition invaded Iraq, with the aim of eradicating both Saddam Hussein and his weapons of mass destruction. The architects of this war were neoconservatives who regretted what they saw as George H. W. Bush’s premature withdrawal from the Middle East, his failing to occupy Iraq and topple Hussein after pushing him out of Kuwait. With few exceptions, Democrats and Republicans alike supported the wars in Afghanistan and Iraq, but support for the Iraq war was, from the start, more limited, and dwindled further after it became clear that Hussein in fact had no weapons of mass destruction. “In 2003, the United States invaded a country that did not threaten us, did not attack us, and did not want war with us, to disarm it of weapons we have since discovered it did not have,” wrote Pat Buchanan, placing the blame for the war on the neocons’ hijacking of the conservative movement, whose influence he greatly regretted. He complained, “Neoconservatives captured the foundations, think tanks, and opinion journals of the Right and were allowed to redefine conservatism.”65

  The war on terror differed from every earlier American war. It was led, from Washington, by men and women who had never served in the military, and it was fought, in the Middle East, by an all-volunteer force whose sacrifices American civilians did not share or know or even, finally, consider. In both Afghanistan and Iraq, the United States’ regime-building efforts failed. Vietnam had been a bad war, and a distant war, and its sacrifices had been unevenly borne, but they had been shared—and protested. Far distant from the United States, in parts of the world that few Americans had ever visited, the wars in Afghanistan and Iraq were fought by a tiny slice of the American population; between 2001 and 2011, less than one-half of 1 percent of Americans saw active duty. Hardly any members of Congress had ever seen combat, or had family members who had. “God help this country when someone sits in this chair who doesn’t know the military as well as I do,” Eisenhower once said. George H. W. Bush was the last president of the United States to have seen combat, to fear and loathe war because of knowing war.66 His successors lacked that knowledge. During the Vietnam War, George W. Bush had avoided combat by serving in the Texas Air National Guard. Bill Clinton and Donald Trump had dodged the draft. Obama came of age after that war was over. None of these men had sons or daughters who served in the military.67

 
