The Idealists


by Justin Peters


  The arguments that were made decades before about piano rolls recurred in this new context, with some lobbyists for educational organizations claiming that authors and publications benefited from having their articles photocopied and distributed, insofar as such activity might actually persuade the duplicators to subscribe to the journals in question. The Chicago Tribune opined, “The school mimeograph should be viewed not as a piratical rival to the trade publisher but as a helpful unpaid publicity agent who helps publishers’ long-term sales.”59

  That argument didn’t take root. The rise of photoduplication technologies that facilitated the rapid spread of information merely underscored the fragility of copyright holders’ claims that intellectual property was indistinguishable from regular physical property. “We know that volumes of information can be stored on microfilm and magnetic tape. We keep hearing about information-retrieval networks,” former senator Kenneth B. Keating told Congress in 1965. “The inexorable question arises—what will happen in the long run if authors’ income is cut down and down by increasing free uses by photocopy and information storage and retrieval? Will the authors continue writing? Will the publishers continue publishing if their markets are diluted, eroded, and eventually, the profit motive and incentive completely destroyed? To pose this question is to answer it.”60

  * * *

  IN her interesting book, Who Owns Academic Work?, Corynne McSherry made a distinction between a fact and an artifact.61 A fact is one of “nature’s creations,” an ownerless piece of knowledge or data that belongs to the public. The periodic table, for example, is a fact: the tabular relationship between the chemical elements belongs to no one person, but instead is common knowledge on which all are free to draw and build. An artifact is a proprietary derivative of a fact: a specific and unique expression of fact or fancy. A colorful, uniquely designed poster of the periodic table, then, is an artifact, and the poster’s designer owns the right to sell copies of that poster and prevent you from reprinting it without permission. Facts are in the ether and cannot be deaccessioned—they belong to everyone. Artifacts are most often found in museums, cordoned behind velvet ropes, with direct access restricted to authorized personnel.

  Another useful concept when attempting to understand the various points of view on copyright and intellectual property issues is the difference between gift economies and market economies. In a gift economy, a person gains influence and stature by the skill with which he gives and receives gifts; the gift, which is freely given with no expectation of recompense or tangible reward, is the primary medium of exchange.

  The university has traditionally been a gift economy. Research is produced there and disseminated to the world; the academic gift-giver benefits when his theories and discoveries gain acceptance and adoption. Others then use his work to make new discoveries, from which the initial gift-giver in turn derives benefit. Nonacademic work, however, exists in a market economy, where cultural artifacts are bought and sold, and the creator benefits by reaping financial reward from the sale of his work. The creator is not giving gifts to the public: he is, rather, selling goods to the public, and he needs the money because, unlike his professorial counterparts, his salary is generally not underwritten by a large, benevolent institution.

  Libraries and universities pose an apparent threat to copyright guardians because they ostensibly represent a gift economy; they exist to disseminate information for the betterment of the commonweal, rather than to control the flow of information for the benefit of creators. The concept of intellectual property is a cornerstone of the market economy. Its adherents view with suspicion anyone who fails to share their free-market framework for cultural creation, and thus easily classify dissenters as pirates or thieves.

  The goal of all copyright laws is to restrict cultural gift-giving: to ease the transformation of facts into artifacts, to give creators a vast supply of velvet rope with which to regulate access to their creations. This impulse is not wholly negative; for one thing, artifacts can popularize facts. But the ultimate text of the law that became the Copyright Act of 1976 removed any residual doubt about exactly whose interests copyright was designed to protect. “In virtually all the contested points in the bill it seems that in the final revision all were reconciled in behalf of copyright owners. Is that not essentially true?” Representative Robert W. Kastenmeier observed when the bill was first presented to Congress in 1965. Eleven years later, his observation was still valid: the final bill was tailored to give creators and their corporate allies every possible opportunity to control and profit from their creations. The law expanded the standard copyright term from twenty-eight years, renewable once, to fifty years after the death of the author. It explicitly granted copyright protection to a range of new works. In fact, it guaranteed that any form of creative expression that originated in the United States would enjoy copyright protection from the moment of its creation, whether or not it had officially been registered.

  The concurrent development of the Copyright Act of 1976 and the automated library illustrates two opposing views on new technologies, intellectual property, and the implications of the former for the latter. The same technologies that inspire some people to dream of a better world can inspire apocalyptic visions in others. Meeting the challenge to greatness depends entirely on how “greatness” is defined—and on who is doing the defining.

  In 1965, with the Intrex- and UNIVAC-powered futures apparently imminent, an information scientist named Watson Davis spoke to the National Microfilm Association about something he called “the universal brain”: a mechanized library that promised to store and organize the sum of human knowledge. The world already had the technology to build the universal brain, Davis enthused; social friction seemed the primary impediment to its construction. “Is cooperation between librarians and organizations so difficult that ‘one big library’ can not be accomplished?” Davis asked his audience. “A little organization, cheerful argument, and gentle pressure from users and financial supporters such as government and foundations would make it possible. The know-how exists and we only need the let’s-do.”62

  The let’s-do has always been in short supply. If computing technology is indeed the centerpiece of seductive dreams—a license for optimists to imagine idealized futures—then one of the most persistent fantasies is the cheerful abdication of analog attitudes in favor of some impending digital utopia. In fact, technologies that promise to solve the world’s problems often end up entrenching them further. In the 1960s and 1970s, the world stood on the verge of a new informational era. But its advent portended the demise of the old one—and an incumbent rarely welcomes his challenger.

  As for Project Intrex, it faced a dearth of let’s-do and, ultimately, never came together. “The project seemed to be a bottomless financial pit,” wrote the historian Colin Burke, perhaps the world’s foremost Intrex authority.63 Burke also noted that the “technology was not ready to provide the high-powered information engine their goals demanded. Those that put too much faith in rapid technological advances, like Intrex, had to spend too much time waiting for the technology to appear.” The project collapsed by 1972, with little to show for itself except “two special ‘combined’ terminals that had already become somewhat outdated and, perhaps, some students and staff who were motivated to continue the search for the automated library.”64 As Intrex disintegrated and the Copyright Act of 1976 materialized, the organizers of both were only dimly aware that they would soon have to deal with the biggest library of them all: the Internet.

  4

  THE INFINITE LIBRARIAN

  In the waning months of 1970, an obstreperous drifter named Michael Hart returned to his hometown for good.1 Like many of his generation, the twenty-three-year-old Hart wanted to do something meaningful with his life; like many of his generation, he wasn’t exactly sure what that something would be. “People are shortsighted + shallow + world is suffering as a result,” he scribbled on a sheet of paper. “A. Don’t know if anything can be done about it. B. Find out.”2

  Since leaving the Army in 1968 he had chased those answers, with little to show for his efforts.3 He spent months wandering America’s interstate highways as a cross-country hitchhiker, but the road brought him no great epiphanies.4 For a while he had attempted to effect social change as a singer of didactic folk songs. But “the World at large wasn’t listening,” Hart later wrote, “and I finally decided that if Dylan, Christ, and Simon & Garfunkel couldn’t have the effect I wanted then neither would I be able to do it in that manner.”5

  You can say this for Michael Hart, at least: he didn’t expect to succeed where Jesus Christ had failed. But that was virtually the only concession he was willing to make to his own limitations. Since childhood, Hart had been convinced of his unique greatness. “I wonder at those times whether he thinks of himself as some kind of superman who is above and beyond the level of the rest of us poor mortals,” his mother wrote to family friends when Michael was nineteen.6 As an Eagle Scout candidate in 1964, the young Hart had surprised the members of his review board by warning them not to mistake him for an ordinary civic-minded teenager. “I told them,” he recalled, that “I was revolutionary.”7

  And Hart was indeed revolutionary, if you loosely define revolutionary as “disruptive.” Though these sorts of things aren’t really tracked by any official body, Hart was likely among the least compliant teenagers in America during the 1960s. As a boy, on his first day at a new school in central Illinois, Hart announced his arrival by threatening to sue the administration over its dress code.8 Twice he enrolled at the University of Illinois at Urbana-Champaign; twice he withdrew in pique, enraged at perceived slights from instructors who were unable or unwilling to deal with his temperamental genius.9 (“My Integrity was being lost to the System,” he wrote later, by way of explanation.)10 Drafted into the military in 1966, he immediately antagonized his commanders with a refusal to swear allegiance and faith to the Constitution.11 “I can beat the Army,” he wrote to his family.12 He couldn’t—and he didn’t—but he kept on trying, to the extent that his eventual discharge from active service in 1968 surely prompted several champagne toasts somewhere in the bowels of the Pentagon.

  It wasn’t that Hart was a peacenik, not particularly. (Though he sought conscientious objector status, his mother suspected that this tactic may have been a ploy to get out of the Army.)13 It was more that Hart couldn’t stand being made to march. “I just wasn’t brought up to blindly obey anyone, particularly a stranger, no matter how authoritarian,” Hart observed.14 Professors, administrators, superior officers—Hart bristled at anyone who demanded intellectual deference, anyone who used “because I said so” as a rhetorical trump. “I did not understand that a person could become attached to a method of doing things, would hold on to that method, fight for it, try to destroy any other in an effort to avoid having to learn something new,” Hart later wrote.15 He defined himself by his willingness to reach his own conclusions, and to defend them in the face of universal disagreement. His personal papers include the transcribed lyrics to two thematically relevant contemporary pop songs: “Driftin’ ” and “Different Drum.”16

  Hart traveled to his own beat as a matter of principle. A basic problem with society, in Hart’s estimation, was that too many of its citizens were stuck in their own silos, accepting received wisdom as dogma, unwilling or unable to think for themselves. Hart hadn’t had much success convincing others to open their minds, however, and he wondered if his window of opportunity was closing. “At the end of the 60’s the Age of Apathy began and I started losing interest in my audiences because ther [sic] were losing interest in everything,” he wrote, explaining why he had abandoned his musical ambitions.17 So he returned to Urbana-Champaign in the winter of 1970, in search of a third shot at a college degree. There he learned that the instrument with which he would effect change wasn’t the guitar after all. It was the keyboard.

  * * *

  HART matriculated (again) at the University of Illinois at Urbana-Champaign in 1971, determined to get good grades “in order to shove it up the ass of society,” as if the most subversive thing he could do, as a man from whom nothing was expected, was to show that he was the most capable person around.18 The plan made sense to Hart, at least, and for perhaps the first and last time in his life he decided to succeed on someone else’s terms. He enrolled in a program called Individual Plans of Study, which allowed motivated students to design their own majors and chase their own interests, wherever those interests might lead them.19 He ended up at the U of I’s Materials Research Lab.

  At the time, the lab housed a large Xerox Sigma V mainframe computer, a $300,000 leviathan that barely resembled modern-day machines.20 The room-size behemoth was covered in switches and buttons and lights; it screamed Science! and, indeed, like other computers of the era, the Xerox Sigma V was intended for use by science and engineering departments. The very word computer, first used in 1613, originally referred to specific individuals tasked with performing calculations.21 A similarly narrow arithmetical lens informed the view of contemporary computers in the mid–Space Age. “We are still not thinking of the computer as anything but a myriad of clerks or assistants in one convenient console,” wrote Columbia University professor Louis T. Milic in the inaugural issue of Computers and the Humanities in 1966. “We do not yet understand the true nature of the computer. And we have not yet begun to think in ways appropriate to the nature of this machine.”22

  Five years later, Milic’s observation remained valid. To the materials scientists of the University of Illinois in 1971, it perhaps wasn’t obvious that the monolithic Sigma V had a nature, much less one that was worth learning. They were content to entrust its daily operations to several young system programmers, who acted as data chauffeurs, guiding the computer to the researchers’ desired destinations while ensuring it didn’t break down or catch fire. When not running programs for lab personnel, the system operators enjoyed free rein on the machine and were encouraged to spend that unsupervised time exploring its capabilities.

  Hart befriended two of the system programmers and started hanging around the lab, which was cool and quiet and conducive to study. On July 4, 1971, rather than brave the summer heat for a long walk home, Hart decided to spend the night in the mainframe room, so he ran out for groceries and settled in for history’s nerdiest slumber party.23 The system programmer on duty was a friend of Hart’s brother, and, for some reason, he decided that night to give Hart his very own account on the Xerox Sigma V—which, Hart later claimed, may have made him the first private citizen on the Internet.24

  Hart realized that he had been given great power, and this power made him hungry. Reaching into his grocery bag for some food, he noticed that a checkout clerk had included, free of charge, a pamphlet containing reproductions of various patriotic documents.25 The insert began with the Declaration of Independence—America’s foundational text, the document that announced a revolution so significant that the entire world took notice. For a young man keen on kindling an epochal revolution of his own, the symbolism was surely too obvious to miss.

  In that moment, Michael Hart saw the future. Not two hours after obtaining an account on the Xerox Sigma V, Hart later bragged, he had divined the machine’s core function. He announced that “the greatest value created by computers would not be computing, but would be the storage, retrieval, and searching of what was stored in our libraries.”26 And he quickly realized that someone would have to stock that library’s digital shelves.

  So, on that Independence Day, Hart resolved to transcribe the Declaration of Independence and put it onto the Xerox Sigma V for any and all to read. Using a Teletype terminal,27 Hart typed it up in capital letters—computers did not yet support lowercase text28—saved the document on a hard-drive pack, and informed the other network users that the Declaration of Independence was now available in computerized format.29 It was the first e-book.

  Years later, Hart would burnish this act into legend—the genesis of a movement that would eventually spread across the world, one that would “undoubtedly become the greatest advance to human civilization and society since the invention of writing itself.”30 At the time, though, it seemed less like an opening salvo than a misfire; just another unnoticed folk song. (According to Hart, the Declaration was accessed only six times.)31 The upload had its most profound effect on the uploader himself, Michael Hart, who was convinced that he had hit on something big, even if, or perhaps because, no one else shared his optimism.

  The story of the modern free culture movement essentially begins here, in the early days of digital computing, on the margins of mainstream consciousness; its first protagonists were unsupervised misfits such as Michael Hart, who accomplished all they did simply because there was nobody around to stop them. Like many of his type, Hart believed that open information was intrinsically good, while rarely bothering to specify just how and why open information would change the world. To him, the progression seemed obvious. You couldn’t change the world without changing minds, and you couldn’t transform people’s thought processes without giving them something new to think about. The powerful wanted to keep the masses ignorant and pliable by making information expensive and scarce. Hart concluded that digital networks could be used to set that data free.

  Over the next forty years—intermittently at first, then consistently after the introduction of the World Wide Web in 1991—in the face of doubt, poverty, and general public indifference, Hart devoted himself to typing public-domain texts into computers for the benefit of the wider world. At first, he labored alone; later he managed a corps of volunteers and served readers from all inhabited continents. By the time of his death in 2011, Hart had helped digitize more than thirty-seven thousand books and historical documents—the Bible, the complete works of Shakespeare, Areopagitica, Paradise Lost, The Federalist Papers, The Book of Mormon, The Pickwick Papers, Moby-Dick, O Pioneers!, and scores of lesser-known works—as part of this earliest and perhaps purest attempt to create and populate a digital library of the future.32

 
