Both breakups—that of AT&T and that of the studios—would generate significant controversy, at the time and later. For in each case, there would be those who saw the dismemberment as a senseless summary execution of a robust, if restrictive, industry. With Bell particularly, a case in which the Justice Department had deferred action until a combination of the monopoly’s arrogance and technological stagnation made it seem to many ludicrously overdue, there were nonetheless those who would regard—indeed, still do regard—the breakup as the crime of the century. In 1988 two Bell engineer-managers, Raymond Kraus and Al Duerig, would write a book called The Rape of Ma Bell, decrying how “the nation’s largest and most socially minded corporation was defiled and destroyed.” Barry Goldwater, the conservative icon and candidate for president, put it this way: “I fear that the breakup of AT&T is potentially the worst thing to happen to our national interests in telecommunications that will ever occur.”3
The critics have a point: a federal breakup is an act of aggression and arguably punishes success. In the short term, the consequences of the state’s interventions in both communications cases were ugly indeed. Each industry lapsed into an immediate period of chaos and experienced a drop in product quality. The decline of the film industry, which had been so grand and powerful in the 1930s and 1940s, would last into the 1970s. And in the immediate aftermath of the AT&T breakup, consumers saw a drop-off in service quality utterly unexampled since the formation of the Bell system. In fact, the “competitive” industries that replaced the imperial monopolies were often not as efficient or successful as their predecessors, failing to deliver even the fail-safe benefit of competition: lower prices.
Whether sanctioned by the state or not, monopolies represent a special kind of industrial concentration, with special consequences flowing from their dissolution. Often the useful results are delayed and unpredictable, while the negative outcomes are immediate and obvious. Deregulating air travel, for instance, implied a combination of greater choice, lower prices, and, alas, smaller seats, among other downgrades, as one might have more or less foreseen. The breakup of Paramount, by contrast, and the fall of the studio system ushered in something less expected: the collapse of the Production Code system of film censorship. While not the only factor transforming film in the 1960s and 1970s, the end of censorship certainly contributed to an astonishing period of experimentation and innovation. Likewise, the breakup of Bell laid the foundation for every important communications revolution from the 1980s onward. There was no way of knowing that thirty years on we would have an Internet, handheld computers, and social networking, but it is hard to imagine their coming when they did had the company that buried the answering machine remained intact.
The case for industry breakups comes from Thomas Jefferson’s idea that occasional revolutions are important to the health of any system. As he wrote in 1787, “a little rebellion now and then is a good thing, and as necessary in the political world as storms in the physical.… It is a medicine necessary for the sound health of government.”
Let us now evaluate the success of the government’s first breakup of an information empire. It is not a tale to rival the epic of AT&T’s breakup, which we take up in greater detail later. But it is the first crack in the ancien régime of state connivance with information industries and as such a fitting place to start.
THE STUDIOS
By the 1940s the Hollywood studio system had been perfected as a machine for producing, distributing, and exhibiting films at a guaranteed rate of return—if not on every film, on the product in the aggregate. Each of the five major studios had by then undergone full vertical integration, with its own production resources (including not just lots and cameras but actors, directors, and writers as human cogs as well), distribution system, and proprietary theaters. There was much to be said for this setup in terms of efficiency: it was effectively an assembly line for film. Out of the factory came a steady supply of films of reliable quality; but, like any factory, the studios did not admit much variety in their product. Henry Ford famously refused to issue his Model T car in any color but black, and while Hollywood didn’t go that far, there was a certain sameness, a certain homogeneity to the films produced in the 1930s through the 1950s. That homogeneity was buttressed by the ongoing precensorship under the Production Code, which ensured that films would not stray too far from delivering the “right” messages: marriage was good, divorce bad; police good, gangsters bad—leaving no room for, say, The Godfather, let alone its sequels.
The cornerstone of the studio system was the victory Zukor had won: control of the large first-run theaters in major cities, together with the ongoing block booking system. In America’s ninety-two largest cities, the studios owned more than 70 percent of the first-run theaters. And though these first-run movie palaces comprised less than 20 percent of all the country’s theaters, they accounted for most of the ticket revenue.4 As the writer Ernest Borneman put it in 1951, “control of first run theaters meant, in effect, control of the screen.”
The man inspired to challenge this system was Thurman Arnold, a Yale law professor turned trustbuster with some rather striking ideas about industrial concentration. Arnold, whose name continues to grace one of Washington, D.C.’s most prestigious law firms (Arnold & Porter), was by today’s standards an antitrust radical, a fundamentalist who believed the law should be enforced as written. In The Folklore of Capitalism (1937), Arnold compared the role of U.S. antitrust law to statutes concerning prostitution: he deemed that both existed more to flatter American moral vanity than to be enforced.5
His language may have been strong, but Arnold had a point. By the time he took over the Justice Department’s Antitrust Division in the late 1930s, the United States, once a nation of small businesses and farms, was dominated by monopolies and cartels in nearly every industry. As the economist Alfred Chandler famously described it, the American economy was now dominated by the “visible hand” of managerial capitalism.6 This despite the fact that the text of the Sherman Act, the main antitrust law, wasn’t (and isn’t) all that ambiguous. The law explicitly made monopolization and deals in restraint of trade illegal. A nonlawyer can understand this from reading sections one and two of the Act:*
Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal.
Every person who shall monopolize, or attempt to monopolize … any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony.
Arnold, as soon as he gained Senate confirmation, acted quickly to implement his literalist view of the antitrust laws. His aim was to bring quick, high-visibility lawsuits breaking up cartels whose evils American citizens could easily understand. His first lawsuits were brought against the car industry (GM, Ford, and Chrysler, the “Big Three”); the American Medical Association, which he charged with preventing competition among health plans; and most relevant for us, the film industry. Arnold’s 1938 lawsuit against Hollywood charged twenty-eight separate violations of the Sherman Act and demanded that the film studios “divorce” their first-run theaters. He repeatedly denounced the film industry as “distinctly un-American,” characterizing its structure as a “vertical cartel like the vertical cartels of Hitler’s Germany, Stalin’s Russia.”7
A decade would intervene, with various near-settlements and consent decrees, but the Antitrust Division finally achieved what Arnold wanted. In 1948, the United States Supreme Court agreed with the Justice Department’s petition that Hollywood was an illegal conspiracy in restraint of trade, whose proper remedy lay in uncoupling the studios from the theaters. The Court’s ruling by Justice William O. Douglas readily accepted Arnold’s contention that the first-run theaters were the key to the matter, and with that acceptance disappeared any hope the studios might prevail. The Court ruled that they had undeniably fixed prices and, beginning in 1919 with Zukor’s Paramount, unfairly discriminated against independent theaters by selling films in blocks. There were various other offenses, but that was enough. Over the next several years, every studio would be forced to sell off its theaters.8
For the new information industries of the twentieth century, the Paramount decision was the first experience they would have of the awesome power of the state. The government had induced a paroxysm of creative destruction, seizing an industry by the throat. The infractions were indisputable, but there was nevertheless a degree of arbitrariness in the exercise of state power. Was this, after all, not the same government that had encouraged and supported the broadcast networks and the Bell system in their hegemonic forms? It was indeed, but Thurman Arnold was a different head of the hydra. Stripped of their control over exhibition, the Hollywood studios lost their guaranteed audiences. The business as they knew it would have to be entirely rethought.
In the short term came the chaos of breakup without the economic efficiencies. Robert Crandall, an economist at the Brookings Institution and a critic of the antitrust laws, has argued that the Paramount decree, as it was known, failed to lower the price of theater tickets.* And while there may never be a good time to sustain such a body blow, the action came at an especially bad moment for the studios; the arrival of television and the rise of suburbs after the war cut sharply into film viewership and revenues from the key urban markets. Still, in some sense the Paramount decree may have been just the bitter pill that the already listless studios needed: losing the first-run advantage would force them to reorganize and change the way films were made sooner rather than later. Institutional inertia being what it is, systems are rarely fixed unless they are broken, and this one, against its will, was broken utterly.9
Whatever its immediate consequences, the Paramount decision launched a transformation of American film as cultural institution, throwing the industry back into the open state it had emerged from in the 1920s. As Arnold had hoped, the independence of theaters cleared the way for independent producers, and even for foreign filmmakers, long excluded, to now sell to theaters directly. But the most profound effects of the decree would not emerge for decades. The industry would remain in an odd stasis through the 1950s and into the early 1960s. Eventually, though, as the mode of film production changed, returning to a decentralized style not seen since the 1910s and 1920s, so, too, did the product. After the decree, films were increasingly made one at a time rather than from a mold, according to the vision of a director or producer. “What replaced film production in the dismantled studios was a transformed system,” writes the economist Richard Caves, “with most inputs required to make a film coming together only in a one-shot deal.… the same ideal list of idiosyncratic talents rarely turns up for two different films.”10
With the fall of the studios, perhaps even more decisive than the transformation of production structure was the end of the censorship system. The power of the old Production Code written by Daniel Lord and enforced by Joseph Breen was effectively abrogated when the studios lost control over what the theaters showed.11
A very different type of production was feasible once theaters were free to buy unapproved films and ignore the regime that the studios had enforced in exchange for Breen’s blessing. Producers took the cue, creating darker, countercultural, and controversial works—everything the Code prohibited. The Code itself was still around, but it had lost its bite. In 1966, Jack Valenti, the new, forty-five-year-old head of the MPAA, decided he wanted to “junk it at the first opportune moment.” He noticed something obvious in retrospect: “There was about this stern, forbidding catalogue of ‘Do’s’ and ‘Don’ts’ the odious smell of censorship.”12
Valenti instituted the familiar ratings system (G, PG, R, X) in 1968, and far from marking a return to restraint, it was a license to make films patently unsuitable for children—even to the point of being what is euphemistically called “adult.” At the same time, the freedom to import European films had its own influence on American production. Seeing the popularity of foreign offerings—typically moodier, more cerebral, and erotically explicit—the desperate studios were forced to invest in a new kind of American film. The result is known by film historians as the New Hollywood era, among its emblematic products Bonnie and Clyde, Easy Rider, and Midnight Cowboy, all edgy, defiant affairs announcing a new day for the industry and the culture.*
So great was the range of experimentation in film in the 1970s that for a time, as surprising as it sounds now, X-rated films—that is, pornography—went through “normal” theatrical releases. The most famous example is 1972’s Deep Throat, which played in basically the same kind of theaters and made the same kind of money that a Hollywood blockbuster might today. Here was the medium as far as it could get from the days when the Production Code required preapproval of all films and obliged filmmakers, as a matter of course, to give audiences the “right” answers to all social questions.
Of course not every production of the period, which lasted until the early 1980s, would prove well made or enduring. Nevertheless the freedom to fail and sometimes to offend was extremely salutary for the medium in the era following the age of guaranteed success. What greatness did result came because directors and producers were allowed to experiment and probe the limits of what film could be. Whatever the merits of the individual outcome, the variety of ideas, in style and substance, was the widest it had been since before the 1934 Code.13
Antitrust action rarely takes the promotion of such variety and cultural innovation as one of its goals. The purpose of the statutes is to facilitate competition, not cultural or technological advancement (they were, after all, enacted under Congress’s constitutional authority over interstate commerce). Innovation in an expressive form isn’t ordinarily something one can patent, nor can creativity be satisfactorily quantified. But in considering whether government action was worthwhile, let us not, particularly where information and culture industries are concerned, fall into the trap of looking to results that only econometrics can reveal.
Films are not screwdrivers. As with all information industries, the merits of a breakup cannot be reduced to its effect on consumer prices, which may be slow to decline amid the inefficiencies and chaos of the immediate aftermath. But who would deny there are intangible costs to censorship? It is useful to consider whether Hollywood would be the peerless cultural export that it is were the industry not open to the full variety of sensibilities and ideas a pluralistic society has to offer.
* The argument that the text is ambiguous comes from the idea that the law would make so much illegal that it couldn’t possibly mean what it literally says.
* Of course, there is no knowing whether prices would have risen even higher were the industry still intact but operating under new market pressures.
* It is perhaps difficult to imagine that even without the antitrust action of the Roosevelt administration, Hollywood would not have evolved with the national mood in the 1960s. Changing sensibilities might well have upended the Code. But one shouldn’t underestimate the capacity of an entrenched industry to avoid the risk of innovation, the initial resistance to feature-length films providing perhaps the most stunning example in the history of film.
CHAPTER 12
The Radicalism of the Internet Revolution
In late April 1963, in the D Ring of the massive new building called the Pentagon, J.C.R. Licklider sat before a typewriter in his office, working on a memo. A member of the Defense Department’s Advanced Research Projects Agency (ARPA)—he wore the thick-rimmed black glasses popular among engineers of the era to prove it—Licklider addressed his memo to “Members and Affiliates of the Intergalactic Computer Network,” as a sort of joke. But in this message he sent around to the nation’s top computer networking scientists, Licklider argued very much in earnest that the time had come for a universal, or intergalactic, computer network: “It will possibly turn out, I realize, that only on rare occasions do most or all of the computers in the overall system operate together in an integrated network. It seems to me to be interesting and important, nevertheless, to develop a capability for integrated network operation.”1
That may not sound terribly exciting. “We would have at least four large computers,” he continued, “and a great assortment of disc files and magnetic tape units—not to mention the remote consoles and teletype stations—all churning away.” A collection of hardware churning away; but to what end? Actually, the “intergalactic memo” was the seed of what we now call the Internet.
We may exaggerate the degree to which an invention can resemble its inventor, just as a pet can resemble its master, but “scattered” and “quirky” are terms equally befitting both the Internet and Licklider, one of the first to envision it. He carried with him everywhere a can of Coca-Cola, his trademark. “He was the sort of man who would chuckle,” said his colleague Leo Beranek, “even though he had said something quite ordinary.” We met both men, readers will recall, during the Hush-A-Phone affair.
Born in St. Louis in 1915, Licklider undertook a protracted curriculum at Washington University, emerging with undergraduate degrees in psychology, mathematics, and physics. He made his name in acoustics, which he studied as a psychologist, aiming to understand how sound reached the human mind. After earning his Ph.D. at the University of Rochester, he taught at MIT in the late 1950s, where, at Lincoln Laboratory, he first used a computer and got hooked.2