
The Idealists


by Justin Peters


  MIT’s public reputation for openness extends to the wider world. The front doors to its main building on Massachusetts Avenue, the imposing Building 7, are always unlocked. For years, local drama groups conducted impromptu rehearsals in vacant MIT classrooms. In 2010, any stranger could show up at MIT, unfold a laptop computer, connect to its wireless Internet network, and retain the connection for a full two weeks; guests were even allowed to access MIT’s library resources.

  As a Safra Center fellow, Swartz had access to JSTOR via Harvard’s library. So why did he choose to deploy his crawler at MIT, a school with which he was not formally affiliated? One possible reason is that computer-aided bulk-downloading violated JSTOR’s stated terms of service, and, for that reason, Swartz may have preferred to remain anonymous. MIT might even have seemed to him like the sort of place that would be unbothered by, and possibly encourage, his actions. But Swartz would soon realize that MIT’s public image did not directly align with reality.

  * * *

  THE Massachusetts Institute of Technology is not configured to address the sorts of philosophical and ethical questions that one might expect a great university to address, in part because it is not and has never been a university. It is, rather, a technical institute. (This distinction might seem like a minor semantic point, but our self-descriptions set implicit boundaries that we are often loath to cross.) Ever since its establishment in 1861, the institute has trained engineers and pursued practical applications for existing and emerging technologies. A center for applied thinking and science, it specializes in the practical rather than the philosophical. It is, as the scholar Joss Winn once put it, “the model capitalist university.”48

  In 1919, three years after the school relocated from central Boston to its current location on the Charles River in Cambridge, MIT president Richard Maclaurin advocated a “Technology Plan” that would help the institute foster closer ties with industry. Institute benefactor and camera magnate George Eastman, whose generous donations helped underwrite the school’s relocation, had assured Maclaurin that MIT would “rise to a position of transcendent usefulness,” and the Technology Plan would speed its ascent.49

  On January 10, 1920, addressing a room of MIT alumni, Maclaurin said that the plan would allow MIT to “give industrial corporations the information that they want regarding men and scientific processes that are applicable to their industry. A mere school might not be able to do this, but an institution conceived so broadly as Technology [MIT] is well adapted for this great end.”50 Maclaurin died suddenly five days later, but the Technology Plan survived in the form of the new Division of Industrial Cooperation and Research, which was charged with marketing the school’s “scientific and industrial experience and creative aptitude” to companies willing and able to purchase such things.51 The program was financially successful, and the lessons MIT learned from its administration put the school in a position to acquire and manage millions of dollars’ worth of government contracts after the United States entered World War II.

  Former MIT engineering dean Vannevar Bush served as President Franklin Roosevelt’s science adviser during the war and directed large amounts of money toward university laboratories in an effort to develop technologies that could aid the war effort. As Bush later noted, “World War II was the first war in human history to be affected decisively by weapons unknown at the outbreak of hostilities,” which “demanded a closer linkage among military men, scientists, and industrialists than had ever before been required.”52 That linkage was particularly strong at MIT, where researchers at the school’s Radiation Laboratory developed microwave radar systems for the US military and “practically every member of the MIT Physics Department was involved in some form of war work,” as the department itself has stated.53 Academic science helped the Allies win the war—and the war helped the Allies win over academic science.

  In an essay for the anthology Big Science, S. S. Schweber wrote, “By participating massively in the war effort the Institute had transformed itself. By 1943 MIT was overseeing some $25,000,000 of government-supported contracts, whereas its total budget in 1939 was only about $3,000,000.”54 After the war, Bush and others argued for the continuance of these financial ties and recommended the creation of a federal body that would supervise and direct this conjoinment. Government support would revolutionize American academic scientific research, argued Bush in his report Science—The Endless Frontier, germinating a new golden age of pure science and, eventually, “productive advance.” Measures would be taken to ensure the separation of scientist and state and encourage a research environment that was “relatively free from the adverse pressure of convention, prejudice, or commercial necessity.”55 Federal funding would, in fact, save the scientist from the indignities of industrial research. The American people would benefit from the partnership, and eventually so would the world.

  Bush’s benign vision for the future of academic science in the United States translated to a boon for MIT and other domestic research universities. Expansive federal support for scientific research made it easier for scientists to fund expensive experiments that had no immediate practical applications. Like privately held companies that decide to go public to fund growth and expansion, universities reaped immediate benefits from government partnerships—more money meant more hires, new facilities, and increased prestige—but they also ceded some control over their institutional priorities. In his great book The Cold War and American Science, the Johns Hopkins professor Stuart W. Leslie observed that the long-term effects of military-funded university research cannot be measured in merely economic terms, but must also be measured “in terms of our scientific community’s diminished capacity to comprehend and manipulate the world for other than military ends.”56

  The same proposition holds true for the effect that corporate funding has had on academic science. In his seminal 1942 essay, “The Normative Structure of Science,” the American sociologist Robert K. Merton famously listed the fundamental tenets of what he deemed the “scientific ethos”—the scientist’s code of honor, so to speak—philosophical norms that governed inquiry and activity in all respectable laboratories, and without which no credible research could proceed. These four norms are universalism, communism, disinterestedness, and organized skepticism.57

  Although the word has many political associations, Merton’s term communism simply meant that the products of scientific research belong to the community, and that scientists are rewarded for their discoveries not by money, but by “recognition and esteem.” Isaac Newton was the first to describe the laws of motion, for instance, but that fact does not entitle his descendants to royalties whenever an object at rest stays at rest. Scientific discoveries “constitute a common heritage in which the equity of the individual producer is severely limited,” wrote Merton. “Property rights in science are whittled down to a bare minimum by the rationale of the scientific ethic.”58

  Merton further noted the absolute incompatibility of the communal scientific ethos with “the definition of technology as ‘private property’ in a capitalistic economy.”59 As capitalism and academic science continued to coalesce in the twentieth century, this communal ethos was put at risk.

  Today, MIT’s own website proudly announces that it “ranks first in industry-financed research and development expenditures among all universities and colleges without a medical school.”60 In her fascinating 2002 dissertation, “Flux and Flexibility,” the MIT doctoral student Sachi Hatakenaka traced the school’s modern-day corporate partnerships and their effect on institutional structure and priorities. Though MIT has long been a cheerful collaborator with industry, the practice has expanded over the last forty years; Hatakenaka reported that industrial research funding at MIT jumped from $1,994,000 per year in 1970 to $74,405,000 per year in 1999.61

  In the 1980s, conscious of increased competition for federal science grants and fearful that the inevitable end of the Cold War would prompt a decline in federal research funding, many American universities sought a broader range of industry partnerships.62 Around this time, Hatakenaka noted, MIT began to encourage corporate sponsorship of its research laboratories. These alliances were attractive to corporations, as well, many of which had slashed their internal research and development budgets in the midst of economic recession. Hatakenaka wrote that consulting with industry became “something that almost all [MIT] engineering faculty members are expected to undertake, partly to supplement their own income, but more importantly, to keep up with practice.”63

  In her dissertation, Hatakenaka carefully outlined all of the boundaries that MIT has erected to ensure both that corporate sponsors have no direct control over specific research projects and that MIT researchers can maintain their scholarly independence. But the institute’s increasing reliance on corporate contributions perforce affects the administration’s attitude toward the free market, and toward anything that might jeopardize its profitable partnerships.

  The federal support that Vannevar Bush believed would free academic scientists from the need to collaborate with industry has ended up pushing them more firmly into industry’s embrace. Initially, the federal government retained title to all of the scientific research that it funded. For example, if the government gave a university lab a grant to study computing, and the laboratory used that grant money to develop a new type of microprocessor, then the government owned the rights to that microprocessor and could license those rights to private industry. This setup changed in 1980, when President Jimmy Carter signed into law the Bayh-Dole Act, which effectively privatized the fruits of publicly funded research.

  Bayh-Dole was meant to address the perceived technology gap between Japan and the United States around the time of its passage, and to reduce the lag between when a useful technology was developed and when it was brought to market. It decreed that, henceforth, domestic universities would retain the rights to the results of federally funded research and could patent those inventions, license them to industry, and reap the resultant profits.

  After Bayh-Dole became law, universities began to establish what they referred to as “technology transfer offices”: administrative divisions that existed to facilitate patent licensing and to liaise between academic researchers and corporate customers. The act allowed the fruits of academic research to be harvested and sold with unprecedented ease and rapidity, and the ensuing licensing fees made many universities wealthy. When the sale or rental of intellectual property becomes a university profit center, then research outcomes will inevitably become a proprietary concern. “Far from being independent watchdogs capable of dispassionate inquiry,” wrote Jennifer Washburn in her sobering book University, Inc., “universities are increasingly joined at the hip to the very market forces the public has entrusted them to check, creating problems that extend far beyond the research lab.”64

  “Free access to scientific pursuits is a functional imperative” for scientists, wrote Robert K. Merton in 1942.65 The hacker ethic was, in a sense, a critique of applied, corporate science in the university, of the move from Mertonian universalism and communism toward proprietary research. But the hackers gradually left MIT, and the school’s center for innovative computing shifted from the AI Lab to the Media Lab, Robert Swartz’s employer, a loose affiliation of varied research groups that specialized in applied consumer technologies funded by a large array of corporate sponsors.

  In 2008, for example, Bank of America committed $3–$5 million per year for five years to sponsor an MIT Media Lab research group called the Center for Future Banking. “We are bringing together the creative, multidisciplinary research of Media Lab faculty and students with the real-world business experience and deep-domain knowledge of our Bank of America colleagues—all in a highly innovative environment that promotes unconventional thinking and risk-taking,” the director of the Media Lab said at the time.66 The AI Lab hackers had hoped that their work would change the world. The Media Lab researchers just wanted to make it easier to bank there.

  The unlocked campus, the student pranks, the accessible computer network: these are the public trappings of openness, vestiges of an era when MIT perhaps took that openness seriously, or else decoys to deflect attention from the fact that it had never really done so. In 2002, when he was still fifteen, Swartz traveled to MIT to speak about the Semantic Web. At that time, MIT’s wireless Internet network did not allow guest access, and Swartz had to use a public computer terminal to get online. Unfortunately, the only non-password-protected terminals he could locate were behind a locked door. “I joked that I should crawl thru the airvent and ‘liberate’ the terminals, as in MIT’s hacker days of yore,” Swartz recalled on his blog. “A professor of physics who overheard me said ‘Hackers? We don’t have any of those at MIT!’ ”67

  * * *

  ON December 26, 2010, JSTOR realized that the MIT hacker had returned. “Woot . . . mit scraper is back,” read the subject line of an internal e-mail speculating that the hackers were working out of the Dorrance Building on MIT’s campus.68 “87 GB of PDFs this time, that’s no small feat, requires Organization,” one JSTOR employee wrote. “The script itself isn’t very smart, but the activity is organized and on purpose.”69

  The harvesting sessions had ceased only because Swartz had been out of town for a couple of months. In mid-October of 2010, Swartz traveled to Urbana, Illinois, the hometown of Michael Hart, to speak at Reflections | Projections, a conference hosted by computing students at the University of Illinois. (There is no evidence that Swartz and Hart ever met or corresponded.) Swartz’s topic was “The Social Responsibility of Computer Science,” and he argued that computer programmers have an ethical responsibility to advance the public welfare. He spoke about utilitarianism, and the coder’s special ability to write simple programs that could automate and speed tedious, mundane activities, and complete countless tasks in the time it would ordinarily take to complete just one. “Now, as programmers, we have sort of special abilities. We almost have a magic power,” Swartz said. “But with great power comes great responsibility, and we need to think about the good that we can do with this magical ability. We need to think about, from a utilitarian perspective, what’s the greatest good we can achieve in the world at small cost to ourselves?”70

  In early November 2010, Wikler and Swartz went to Washington, DC, to volunteer for the Democratic National Committee in the days preceding that year’s midterm elections. Swartz was assigned to work under Taren Stinebrickner-Kauffman, a political activist who served as project manager for a telephone-outreach tool that helped volunteers contact voters in key states and districts. “Nobody really knew Aaron, so he sort of got plopped onto my team and, needless to say, was very helpful,” Stinebrickner-Kauffman remembered. “It’s hard to imagine a better last-minute volunteer to come in when you’re working on a political technology project.”71

  When Swartz wasn’t working on robocalls and performing other menial tasks for the DNC, he was observing how campaign technology worked and thinking about ways to make it work better. “We were crashing at a friend’s place, talking until five in the morning afterwards, talking about how technology could do something vastly more powerful and politically impactful,” Wikler recalled.72 The election ended, and Wikler tried to persuade Swartz to remain in the capital for a few days to attend Stinebrickner-Kauffman’s birthday party. Swartz declined and returned to Cambridge to revisit his own powerful and politically impactful utilitarian project.

  In November of 2010, he found a wiring and telephony closet in the basement of the Dorrance Building—also known as Building 16—jacked his laptop directly into the campus network, and resumed his downloading. Swartz had refined his tactics: the script no longer triggered any of JSTOR’s download thresholds. (The revolution will, after all, be A/B tested.) The downloads weren’t detected until late December.

  “I am starting to feel like they [MIT] need to get a hold of this situation and right away or we need to offer to send them some help (read FBI),” an aggravated JSTOR staffer wrote on December 26.73 At the time, MIT’s libraries had closed, and the school’s librarians were on budget furlough until January and unable to do any work in the meantime. Thus Ellen Finnie Duranceau of MIT didn’t receive any of JSTOR’s increasingly frantic e-mails until January 3. On January 4, she noted that MIT was unlikely to be able to identify the culprit. “I wish I could say otherwise,” she wrote, “because I realize that JSTOR would like more information and would like us to track the downloaded content to the source.”74

  But by the time Duranceau sent JSTOR the disappointing news, an MIT network engineer had already traced the downloads to a network switch in the Building 16 wiring closet. When he entered the closet, the engineer immediately noticed something odd. Though MIT exclusively uses light blue Internet cables, an off-white cable was plugged into the network switch, leading from the switch to an object concealed under a cardboard box. Lifting the box, the engineer discovered Swartz’s laptop. He called one of his colleagues. Then he called the MIT police.75

  The MIT police, concluding that the investigation required specialized skills that they did not themselves possess, promptly called a Cambridge police detective named Joseph Murphy, who belonged to a regional computer-crime task force. Murphy drove to the scene, accompanied by two other task-force members: a Boston police officer named Tim Laham and a Secret Service agent named Michael Pickett. They arrived at the Dorrance Building’s basement around 11:00 a.m., and soon the wiring closet was outfitted with a motion-activated camera connected to the campus security network and devices that would log the laptop’s download activity and alert MIT officials if the computer was removed from the network.76 At 3:26 p.m. on January 4, the camera captured footage of Swartz entering the closet to check on the laptop and swap out hard drives. The police had a face. Now they needed a name.

 
