by Steven Levy
• • • • • • • •
The one person who was most affected by the schism, and its effect on the AI lab, was Richard Stallman. He grieved at the lab’s failure to uphold the Hacker Ethic. RMS would tell strangers he met that his wife had died, and it would not be until later in the conversation that the stranger would realize that this thin, plaintive youngster was talking about an institution rather than a tragically lost bride.
Stallman later wrote his thoughts into the computer:
It is painful for me to bring back the memories of this time. The people remaining at the lab were the professors, students, and nonhacker researchers, who did not know how to maintain the system, or the hardware, or want to know. Machines began to break and never be fixed; sometimes they just got thrown out. Needed changes in software could not be made. The non-hackers reacted to this by turning to commercial systems, bringing with them fascism and license agreements. I used to wander through the lab, through the rooms so empty at night where they used to be full and think, “Oh my poor AI lab! You are dying and I can’t save you.” Everyone expected that if more hackers were trained, Symbolics would hire them away, so it didn’t even seem worth trying . . . the whole culture was wiped out . . .
Stallman bemoaned the fact that it was no longer easy to drop in or call around dinnertime and find a group eager for a Chinese dinner. He would call the lab’s number, which ended in 6765 (“Fibonacci of 20,” people used to note, pointing out a numerical trait established early on by some random math hacker), and find no one to eat with, no one to talk with.
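For readers who want to check the hackers' arithmetic, a minimal Python sketch (assuming the usual indexing F(1) = F(2) = 1) confirms that the twentieth Fibonacci number is indeed 6765:

    def fib(n):
        # Iteratively compute the nth Fibonacci number, with F(1) = F(2) = 1.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(20))  # prints 6765, the last four digits of the lab's phone number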
Richard Stallman felt he had identified the villain who destroyed the lab: Symbolics. He took an oath: “I will never use a Symbolics LISP machine or help anybody else to do so . . . I don’t want to speak to anyone who works for Symbolics or the people who deal with them.” While he also disapproved of Greenblatt’s LMI company, because as a business it sold computer programs which Stallman believed the world should have for free, he felt that LMI had attempted to avoid hurting the AI lab. But Symbolics, in Stallman’s view, had purposely stripped the lab of its hackers in order to prevent them from donating competing technology to the public domain.
Stallman wanted to fight back. His field of battle was the LISP operating system, which originally was shared by MIT, LMI, and Symbolics. This changed when Symbolics decided that the fruits of its labor would be proprietary; why should LMI benefit from improvements made by Symbolics hackers? So there would be no sharing. Instead of two companies pooling energy toward an ultimately featureful operating system, they would have to work independently, expending energy to duplicate improvements.
This was RMS’s opportunity for revenge. He set aside his qualms about LMI and began cooperating with that firm. Since he was still officially at MIT and Symbolics installed its improvements on the MIT machines, Stallman was able to carefully reconstruct each new feature or fix of a bug. He then would ponder how the change was made, match it, and present his work to LMI. It was not easy work, since he could not merely duplicate the changes—he had to figure out innovatively different ways to implement them. “I don’t think there’s anything immoral about copying code,” he explained. “But they would sue LMI if I copied their code, therefore I have to do a lot of work.” A virtual John Henry of computer code, RMS had single-handedly attempted to match the work of over a dozen world-class hackers, and managed to keep doing it during most of 1982 and almost all of 1983. “In a fairly real sense,” Greenblatt noted at the time, “he’s been outhacking the whole bunch of them.”
Some Symbolics hackers complained not so much about what Stallman was doing as about some of the technical choices he made in his implementations. “I really wonder if those people aren’t kidding themselves,” said Bill Gosper, himself torn between loyalty to Symbolics and admiration for Stallman’s master hack. “Or if they’re being fair. I can see something Stallman wrote, and I might decide it was bad (probably not, but someone could convince me it was bad), and I would still say, ‘But wait a minute—Stallman doesn’t have anybody to argue with all night over there. He’s working alone! It’s incredible anyone could do this alone!’”
Russ Noftsker, president of Symbolics, did not share Greenblatt’s or Gosper’s admiration. He would sit in Symbolics’ offices, relatively plush and well decorated compared to LMI’s ramshackle headquarters a mile away, his boyish face knotting with concern when he spoke of Stallman. “We develop a program or an advancement to our operating system and make it work, and that may take three months, and then under our agreement with MIT, we give that to them. And then [Stallman] compares it with the old ones and looks at that and sees how it works and reimplements it [for the LMI machines]. He calls it reverse engineering. We call it theft of trade secrets. It does not serve any purpose at MIT for him to do that because we’ve already given that function out [to MIT]. The only purpose it serves is to give that to Greenblatt’s people.”
Which was exactly the point. Stallman had no illusions that his act would significantly improve the world at large. He had come to accept that the domain around the AI lab had been permanently polluted. He was out to cause as much damage to the culprit as he could. He knew he could not keep it up indefinitely. He set a deadline for his work: the end of 1983. After that he was uncertain of his next step.
He considered himself the last true hacker left on earth. “The AI lab used to be the one example that showed it was possible to have an institution that was anarchistic and very great,” he would explain. “If I told people it’s possible to have no security on a computer without people deleting your files all the time and no bosses stopping you from doing things, at least I could point to the AI lab and say, ‘Look, we are doing it. Come use our machine! See!’ I can’t do that anymore. Without this example, nobody will believe me. For a while we were setting an example for the rest of the world. Now that this is gone, where am I going to begin from? I read a book the other day. It’s called Ishi, the Last Yahi. It’s a book about the last survivor of a tribe of Indians, initially with his family, and then gradually they died out one by one.”
That was the way Richard Stallman felt. Like Ishi.
“I’m the last survivor of a dead culture,” said RMS. “And I don’t really belong in the world anymore. And in some ways I feel I ought to be dead.”
Richard Stallman did leave MIT, but he left with a plan: to write a version of the popular proprietary computer operating system called UNIX and give it away to anyone who wanted it. Working on this GNU (which stood for “Gnu’s Not Unix”) program meant that he could “continue to use computers without violating [his] principles.” Having seen that the Hacker Ethic could not survive in the unadulterated form in which it had formerly thrived at MIT, he realized that numerous small acts like his would keep the Ethic alive in the outside world.
• • • • • • • •
What Stallman did was to join a mass movement of real-world hackerism set in motion at the very institution which he was so painfully leaving. The emergence of hackerism at MIT twenty-five years before was a concentrated attempt to fully ingest the magic of the computer; to absorb, explore, and expand the intricacies of those bewitching systems; to use those perfectly logical systems as an inspiration for a culture and a way of life. It was these goals which motivated the behavior of Lee Felsenstein and the hardware hackers from Albuquerque to the Bay Area. The happy byproduct of their actions was the personal computer industry, which exposed the magic to millions of people. Only the tiniest percentage of these new computer users would experience that magic with the all-encompassing fury of the MIT hackers, but everyone had the chance to...and many would get glimpses of the miraculous possibilities of the machine. It would extend their powers, spur their creativity, and teach them something, perhaps, of the Hacker Ethic, if they listened.
As the computer revolution grew in a dizzying upward spiral of silicon, money, hype, and idealism, the Hacker Ethic became perhaps less pure, an inevitable result of its conflict with the values of the outside world. But its ideas spread throughout the culture each time some user flicked the machine on, and the screen came alive with words, thoughts, pictures, and sometimes elaborate worlds built out of air—those computer programs which could make any man (or woman) a god.
Sometimes the purer pioneers were astounded at their progeny. Bill Gosper, for instance, was startled by an encounter in the spring of 1983. Though Gosper worked for the Symbolics company and realized that he had sold out, in a sense, by hacking in the commercial sector, he was still very much the Bill Gosper who once sat at the ninth-floor PDP-6 like some gregarious alchemist of code. You could find him in the wee hours in a second-floor room near El Camino Real in Palo Alto, his beat-up Volvo the only car in the small lot outside the nondescript two-story building that housed Symbolics’ West Coast research center. Gosper, now forty, his sharp features hidden behind large wireframe glasses and his hair knotted in a ponytail which came halfway down his back, still hacked LIFE, watching with rollicking amusement as the terminal of his LISP machine cranked through billions of generations of LIFE colonies.
“I had the most amazing experience when I went to see Return of the Jedi,” Gosper said. “I sat down next to this kid of fifteen or sixteen. I asked him what he did, and he said, ‘Oh, I’m basically a hacker.’ I almost fell over. I didn’t say anything. I was completely unprepared for that. It sounded like the most arrogant thing I ever heard.”
The youngster had not been boasting, of course, but describing who he was. Third-Generation hacker. With many more generations to follow.
To the pioneers like Lee Felsenstein, that continuation represented a goal fulfilled. The designer of the Sol and the Osborne 1, the cofounder of Community Memory, the hero of the pseudo-Heinlein novel of his own imagination often would boast that he had been “present at the creation,” and he saw the effects of the boom that followed at a close enough range to see its limitations and its subtle, significant influence. After he made his paper fortune at Osborne, he saw it flutter away just as quickly, as poor management and arrogant ideas about the marketplace caused Osborne Computer to collapse within a period of a few months in 1983. He refused to mourn his financial loss. Instead he took pride in celebrating that “the myth of the megamachine bigger than all of us [the evil Hulking Giant, approachable only by the Priesthood] has been laid to rest. We’re able to come back down off worship of the machine.”
Lee Felsenstein had learned to wear a suit with ease, to court women, to charm audiences. But what mattered was still the machine and its impact on people. He had plans for the next step. “There’s more to be done,” he said not long after Osborne Computer went down. “We have to find a relationship between man and machine which is much more symbiotic. It’s one thing to come down from one myth, but you have to replace it with another. I think you start with the tool: the tool is the embodiment of the myth. I’m trying to see how you can explain the future that way, create the future.”
He was proud that his first battle—to bring computers to the people—had been won. Even as he spoke, the Third Generation of hackers was making news, not only as superstar game designers, but as types of culture heroes who defied boundaries and explored computer systems. A blockbuster movie called WarGames had as its protagonist a Third-Generation hacker who, having no knowledge of the groundbreaking feats of Stew Nelson or Captain Crunch, broke into computer systems with the innocent wonder of their Hands-On Imperative. It was one more example of how the computer could spread the Ethic.
“The technology has to be considered as larger than just the inanimate pieces of hardware,” said Felsenstein. “The technology represents inanimate ways of thinking, objectified ways of thinking. The myth we see in WarGames and things like that is definitely the triumph of the individual over the collective dis-spirit. [The myth is] attempting to say that the conventional wisdom and common understandings must always be open to question. It’s not just an academic point. It’s a very fundamental point of, you might say, the survival of humanity, in a sense that you can have people [merely] survive, but humanity is something that’s a little more precious, a little more fragile. So that to be able to defy a culture which states that ‘Thou shalt not touch this,’ and to defy that with one’s own creative powers is . . . the essence.”
The essence, of course, of the Hacker Ethic.
Appendix B. Afterword: Ten Years After
I think that hackers—dedicated, innovative, irreverent computer programmers—are the most interesting and effective body of intellectuals since the framers of the U.S. Constitution . . . No other group that I know of has set out to liberate a technology and succeeded. They not only did so against the active disinterest of corporate America, their success forced corporate America to adopt their style in the end. In reorganizing the Information Age around the individual, via personal computers, the hackers may well have saved the American economy . . . The quietest of all the ’60s sub-subcultures has emerged as the most innovative and powerful.
—Stewart Brand
Founder, Whole Earth Catalog
In November 1984, on the damp, windswept headlands north of San Francisco, one hundred fifty canonical programmers and techno-ninjas gathered for the first Hacker Conference. Originally conceived by Whole Earth Catalog founder Stewart Brand, this event transformed an abandoned Army camp into temporary world headquarters for the Hacker Ethic. Not at all coincidentally, the event dovetailed with the publication of this book, and a good number of the characters in its pages turned up, in many cases to meet for the first time. First-generation MIT hackers like Richard Greenblatt hung out with Homebrew luminaries like Lee Felsenstein and Stephen Wozniak and game czars Ken Williams, Jerry Jewell, and Doug Carlston. The brash wizards of the new Macintosh computer met up with people who hacked Spacewar. Everybody slept in bunk beds, washed dishes, bused tables, and got by on very little sleep. For a few hours the electricity went out, and people gabbed by lantern light. When the power was restored, the rush to the computer room—where one could show off his hacks—was something probably not seen in this country since the last buffalo stampede.
I remember thinking, “These be the real hackers.”
I was in a state of high anxiety, perched among one hundred fifty potential nit-picking critics who had been issued copies of my first book. Those included in the text immediately found their names in the index and proceeded to vet passages for accuracy and technological correctness. Those not in the index sulked, and to this day whenever they encounter me, in person or in the ether of cyberspace, they complain. Ultimately, the experience was exhilarating. The Hacker Conference, which would become an annual event, turned out to be the kickoff for a spirited and public debate, continued to this day, about the future of hacking and the Hacker Ethic as defined in this book.
The term “hacker” has always been bedeviled by discussion. When I was writing this book, the term was still fairly obscure. In fact, some months before publication, my editor told me that people in Doubleday’s sales force requested a title change—“Who knows what a hacker is?” they asked. Fortunately, we stuck with the original, and by the mid-eighties the term had become rooted in the vernacular.
Unfortunately for many true hackers, however, the popularization of the term was a disaster. Why? The word hacker had acquired a specific and negative connotation. The trouble began with some well-publicized arrests of teenagers who electronically ventured into forbidden digital grounds, like government computer systems. It was understandable that the journalists covering these stories would refer to the young perps as hackers—after all, that’s what the kids called themselves. But the word quickly became synonymous with “digital trespasser.”
In the pages of national magazines, in television dramas and movies, in novels both pulp and prestige, a stereotype emerged: the hacker, an antisocial geek whose identifying attribute is the ability to sit in front of a keyboard and conjure up a criminal kind of magic. In these depictions, anything connected to a machine of any sort, from a nuclear missile to a garage door, is easily controlled by the hacker’s bony fingers, tapping away on the keyboard of a cheap PC or a workstation. According to this definition a hacker is at best benign, an innocent who doesn’t realize his true powers. At worst, he is a terrorist. In the past few years, with the emergence of computer viruses, the hacker has been literally transformed into a virulent force.
True, some of the most righteous hackers in history have been known to sneer at details such as property rights or the legal code in order to pursue the Hands-On Imperative. And pranks have always been part of hacking. But the inference that such high jinks were the essence of hacking was not just wrong, it was offensive to true hackers, whose work had changed the world, and whose methods could change the way one viewed the world. To read of talentless junior high school students logging on to computer bulletin boards, downloading system passwords or credit bureau codes, and using them to promote digital mayhem—and have the media call them hackers . . . well, it was just too much for people who considered themselves the real thing. They went apoplectic. The hacker community still seethes at the public burning it received in 1988 at Hacker Conference 5.0, when a reporting crew from CBS News showed up ostensibly to do a story on the glory of canonical hackers—but instead ran a piece loaded with security specialists warning of the Hacker Menace. To this day, I think that Dan Rather would be well advised to avoid attending future Hacker Conferences.
But in the past few years, I think the tide has turned. More and more people have learned about the spirit of true hacking as described in these pages. Not only are the technically literate aware of hacker ideas and ideals, but they appreciate them and realize, as Brand implied, that they are something to nurture.