by Steven Levy
Russ Noftsker, the nuts-and-bolts administrator of the AI lab, took the threat of protesters very seriously. These were the days of the Weather Underground, and he feared that wild-eyed radicals were planning to actually blow up the computer. He felt compelled to take certain measures to protect the lab.
Some of the measures were so secretive—perhaps involving government agencies like the CIA, which had an office in Tech Square—that Noftsker would not reveal them, even a decade after the war had ended. But other measures were uncomfortably obvious. He removed the glass on the doors leading from the elevator foyer on the ninth floor to the area where the hackers played with computers. In place of the glass, Noftsker installed steel plates, covering the plates with wood so it would not look as if the area were as barricaded as it actually was. The glass panels beside the door were replaced with half-inch-thick bulletproof Plexiglas so you could see who was petitioning for entry before you unlocked the locks and removed the bolts. Noftsker also made sure the doors had heavy-duty hinges bolted to the walls, so that the protesters would not try to remove the entire door, rush in, and storm the computers.
During the days preceding the demonstration, only people whose names were on an approved list were officially allowed entry to this locked fortress. On the day of the demonstration, he even went so far as to distribute around forty Instamatic cameras to various people, asking them to take pictures of the demonstrators when they ventured outside the protected area. If the demonstrators chose to become violent, at least there would be documentation of the wrongdoers.
The barricades worked insofar as the protesters—around twenty or thirty of them, in Noftsker’s estimate—walked to Tech Square, stayed outside the lab a bit, and left without leveling the PDP-6 with sledgehammers. But the collective sigh of relief on the part of the hackers must have been mixed with much regret. While they had created a lock-less, democratic system within the lab, the hackers were so alienated from the outside world that they had to use those same hated locks, barricades, and bureaucrat-compiled lists to control access to this idealistic environment. While some might have groused at the presence of the locks, the usual free access guerrilla fervor did not seem to be applied in this case. Some of the hackers, shaken at the possibility of a rout, even rigged the elevator system so that the elevators could not go directly to the ninth floor. Though previously some of the hackers had declared, “I will not work in a place that has locks,” after the demonstrations were over, and after the restricted lists were long gone, the locks remained. Generally, the hackers chose not to view the locks as symbols of how far removed they were from the mainstream.
A very determined solipsism reigned on the ninth floor, a solipsism that stood its ground even when hackerism suffered some direct, though certainly less physically threatening, attacks in publications and journals. It was tough to ignore, however, the most vicious of these, since it came from within MIT, from a professor of Computer Science (yes, MIT had come around and started a department) named Joseph Weizenbaum. A former programmer himself, a thin, mustachioed man who spoke with a rolling Eastern European accent, Weizenbaum had been at MIT since 1963, but had rarely interacted with the hackers. His biggest programming contribution to AI had been a program called ELIZA, which carried on a conversation with the user; the computer would take the role of a therapist. Weizenbaum recognized the computer’s power, and was disturbed to note how seriously users would interact with ELIZA. Even though people knew it was “only” a computer program, they would tell it their most personal secrets. To Weizenbaum, it was a demonstration of how the computer’s power could lead to irrational, almost addictive behavior, with dehumanizing consequences. And Weizenbaum thought that hackers—or “compulsive programmers”—were the ultimate in computer dehumanization. In what was to become a notorious passage, he wrote, in Computer Power and Human Reason:
. . . bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move. These are computer bums, compulsive programmers . . .
Weizenbaum would later say that the vividness of this description came from his own experience as a hacker of sorts, and was not directly based on observations of the ninth-floor culture. But many hackers felt otherwise. Several thought that Weizenbaum had identified them personally, even invaded their privacy in his description. Some others guessed that Greenblatt had been unfairly singled out; indeed, Greenblatt did send Weizenbaum some messages objecting to the screed.
Still, there was no general introspection resulting from this or any other attack on the hacker life-style. That was not the way of the lab. Hackers would not generally delve into each other’s psychological makeups. “There was a set of shared goals”—Tom Knight would later explain—“a set of shared intellectual excitement, even to a large degree a set of shared social life, but there was also a boundary which people were nervous to go beyond.”
It was this unspoken boundary that came to bother hacker David Silver. He joined the lab as an adolescent and literally came to maturity there, and besides his productive hacking he spent time thinking about the relationship between hackers and computers. He came to be fascinated at how all of them got so attached to, so intimately connected with something as simple as the PDP-6. It was almost terrifying: thinking about this made David Silver wonder what it was that connected people together, how people found each other, why people got along . . . when something relatively simple like the PDP-6 drew the hackers so close. The whole subject made him wonder on the one hand whether people were just fancy kinds of computers or on the other hand whether they were images of God as a spirit.
These introspections were not things he necessarily shared with his mentors, like Greenblatt or Gosper. “I don’t think people had sort of warm conversations with each other,” he would later say. “That wasn’t the focus. The focus was on sheer brainpower.” This was the case even with Gosper: Silver’s apprenticeship with him was not so much a warm human relationship, he’d later reflect, as “a hacker relationship,” very close in terms of what they shared about the computer, but not imbued with the richness of a real-world friendship.
“There were many, many, many years that went by when all I did was hack computers, and I didn’t feel lonely, like I was missing anything,” Silver would say. “But I guess as I started to grow up more, round out more, change more, become less eccentric in certain ways, I started needing more input from people. [By not going to high school] I bypassed all that social stuff and went right into this blue-sky think tank . . . I spent my lifetime walking around talking like a robot, talking to a bunch of other robots.”
Sometimes the hacker failure to be deeply personal had grim consequences. The lab might have been the ideal location for guru-level hackers, but for some the pressure was too much. Even the physical layout of the place promoted a certain high-tension feeling, with the open terminals, the constant intimidating presence of the greatest computer programmers in the world, the cold air and the endless hum of the air conditioners. At one point a research firm was called in to do a study of the excessive, inescapable noise, and they concluded that the hum of the air conditioner was so bothersome because there weren’t enough competing noises—so they fixed the machines to make them give off a loud, continual hiss. In Greenblatt’s words, this change “was not a win,” and the constant hiss made the long hours on the ninth floor rather nerve-racking for some. Add that to other factors—lack of sleep, missed meals to the point of malnutrition, and a driving passion to finish that hack—and it was clear why some hackers went straight over the edge.
Greenblatt was best at spotting “the classical syndrome of various kinds of losses,” as he called it. “In a certain way, I was concerned about the fact that we couldn’t have people dropping dead all over the place.” Greenblatt would sometimes tell people to go home for a while, take it easy. Other things were beyond him. For instance, drugs. One night, while driving back from a Chinese meal, a young hacker turned to him and asked, not kidding, if he wanted to “shoot up.” Greenblatt was flabbergasted. The real world was penetrating again, and there was little Greenblatt could do. One night not long afterward, that particular hacker leapt off the Harvard bridge into the ice-covered Charles River and was severely injured. It was not the only suicide attempt by an AI lab hacker.
From that evidence alone, it would seem that Weizenbaum’s point was well taken. But there was much more to it than that. Weizenbaum did not acknowledge the beauty of the hacker devotion itself . . . or the very idealism of the Hacker Ethic. He had not seen, as Ed Fredkin had, Stew Nelson composing code on the TECO editor while Greenblatt and Gosper watched: without any of the three saying a word, Nelson was entertaining the others, encoding assembly-language tricks which to them, with their absolute mastery of that PDP-6 “language,” had the same effect as hilariously incisive jokes. And after every few instructions there would be another punch line in this sublime form of communication . . . The scene was a demonstration of sharing which Fredkin never forgot.
While conceding that hacker relationships were unusual, especially in that most hackers lived asexual lives, Fredkin would later say that “they were living the future of computers . . . They just had fun. They knew they were elite, something special. And I think they appreciated each other. They were all different, but each knew something great about the other. They all respected each other. I don’t know if anything like [that hacker culture] has happened in the world. I would say they kind of loved each other.”
The hackers focused on the magic of computers instead of human emotions, but they also could be touched by other people. A prime example would be the case of Louis Merton (a pseudonym). Merton was an MIT student, somewhat reserved, and an exceptional chess player. Save for the last trait, Greenblatt at first thought him well within the spectrum of random people who might wander into the lab.
The fact that Merton was such a good chess player pleased Greenblatt, who was then working to build an actual computer which would run a souped-up version of his chess program. Merton learned some programming, and joined Greenblatt on the project. He later did his own chess program on a little-used PDP-7 on the ninth floor. Merton was enthusiastic about chess and computers, and there was little to foreshadow what happened during the Thanksgiving break in late 1966, when, in the little theater-like AI “playroom” on Tech Square’s eighth floor (where Professor Seymour Papert and a group were working on the educational LOGO computer language), Merton temporarily turned into a vegetable. He assumed a classic position of catatonia, rigidly sitting upright, hands clenched into fists at his side. He would not respond to questions, would not even acknowledge the existence of anything outside himself. People didn’t know what to do. They called up the MIT infirmary and were told to call the Cambridge police, who carted poor Merton away. The incident severely shook the hackers, including Greenblatt, who found out about it when he returned from a holiday visit home.
Merton was not one of the premier hackers. Greenblatt was not an intimate friend. Nonetheless, Greenblatt immediately drove out to Westboro State Hospital to recover Merton. It was a long drive, and the destination reminded Greenblatt of something out of the Middle Ages. Less a hospital than a prison. Greenblatt became determined not to leave until he got Merton out. The last step in this tortuous process was getting the signature of an elderly, apparently senile doctor. “Exactly [like something] out of a horror film,” Greenblatt later recalled. “He was unable to read. This random attendant type would say, ‘Sign here. Sign here.’”
It turned out that Merton had a history of these problems. Unlike most catatonics, Merton would improve after a few days, especially when he was given medicine. Often, when he went catatonic somewhere, whoever found him would call someone to take him away, and the doctors would give a diagnosis of permanent catatonia even as Merton was coming to life again. He would call up the AI lab and say, “Help,” and someone, often Greenblatt, would come and get him.
Later, someone discovered in MIT records a letter from Merton’s late mother. The letter explained that Louis was a strange boy, and he sometimes would go stiff. In that case, all you needed to do was to ask, “Louis, would you like to play a game of chess?” Fredkin, who had also taken an interest in Merton, tried this. Merton one day stiffened on the edge of his chair, totally in sculpture mode. Fredkin asked him if he’d like to play chess, and Merton stiffly marched over to the chess board. The game got under way with Fredkin chatting away in a rather one-sided conversation, but suddenly Merton just stopped. Fredkin asked, “Louis, why don’t you move?” After a very long pause, Merton responded in a guttural, slow voice, “Your . . . king’s . . . in . . . check.” Fredkin had inadvertently uncovered the check from his last move.
Merton’s condition could be mitigated by a certain medicine, but for reasons of his own he almost never took it. Greenblatt would plead with him, but he’d refuse. Once Greenblatt went to Fredkin to ask him to help out; Fredkin went back with Greenblatt to find Merton stiff and unresponsive.
“Louis, how come you’re not taking your medicine?” he asked. Merton just sat there, a weak smile frozen on his face. “Why won’t you take it?” Fredkin repeated.
Suddenly, Merton reared back and walloped Fredkin on the chin. That kind of behavior was one of Merton’s unfortunate features. But the hackers showed remarkable tolerance. They did not dismiss him as a loser. Fredkin considered Merton’s case a good example of the essential humanity of the group which Weizenbaum had, in effect, dismissed as emotionless androids. “He’s just crazy,” Minsky would later say of Weizenbaum. “These [hackers] are the most sensitive, honorable people that have ever lived.” Hyperbole, perhaps, but it was true that behind their single-mindedness there was warmth, in the collective realization of the Hacker Ethic. As much as any devout religious order, the hackers had sacrificed what outsiders would consider basic emotional behavior—for the love of hacking.
David Silver, who would eventually leave the order, was still in awe of that beautiful sacrifice years later: “It was sort of necessary for these people to be extremely brilliant and in some sense, handicapped socially so that they would just kind of concentrate on this one thing.” Hacking. The most important thing in the world to them.
• • • • • • • •
The computer world outside Cambridge did not stand still while the Hacker Ethic flourished on the ninth floor of Tech Square. By the late 1960s, hackerism was spreading, partly because of the proliferation of interactive machines like the PDP-10 or the XDS-940, partly because of friendly programming environments (such as the one hackers had created at MIT), and partly because MIT veterans would leave the lab and carry their culture to new places. But the heart of the movement was this: people who wanted to hack were finding computers to hack on.
These computers were not necessarily at MIT. Centers of hacker culture were growing at various institutions around the country, from Stanford to Carnegie-Mellon. And as these other centers reached critical mass—enough dedicated people to hack a large system and go on nightly pilgrimages to local Chinese restaurants—they became tempting enough to lure some of the AI lab hackers away from Tech Square. The intense MIT style of hackerism would be exported through these emissaries.
Sometimes it would not be an institution that hackers moved to, but a business. A programmer named Mike Levitt began a leading-edge technology firm called Systems Concepts in San Francisco. He was smart enough to recruit phone-and-PDP-1 hacker Stew Nelson as a partner; TX-0 music master Peter Samson also joined this high-tech hardware design-and-manufacture business. All in all, the small company managed to get a lot of the concentrated talent around Tech Square out to San Francisco. This was no small feat, since hackers were generally opposed to the requirements of California life, particularly driving and recreational exposure to the sun. But Nelson had learned his lesson earlier—despite Fredkin’s repeated urgings in the mid-sixties, he’d refused to go to Triple-I’s new Los Angeles headquarters until, one day, after emphatically reiterating his vow, he stormed out of Tech Square without a coat. It happened to be the coldest day of the Cambridge winter that year, and as soon as he walked outside his glasses cracked from the sudden change of temperature. He walked straight back to Fredkin’s office, his eyebrows covered with icicles, and said, “I’m going to Los Angeles.”
In some cases, a hacker’s departure would be hastened by what Minsky and Ed Fredkin called “social engineering.” Sometimes the planners would find a hacker getting into a rut, perhaps stuck on some systems problem, or maybe becoming so fixated on extracurricular activities, like lock hacking or phone hacking, that planners deemed his work no longer “interesting.” Fredkin would later recall that hackers could get into a certain state where they were “like anchors dragging the thing down. Time had gone by them, in some sense. They needed to get out of the lab and the lab needed them out. So some surprising offer would come to those persons, or some visit arranged, usually someplace far, far away. These people started filtering out in the world to companies or other labs. It wasn’t fate—I would arrange it.”