
Hackers


by Steven Levy


  He was not charmed by LIFE. Specifically, he was unhappy that Gosper and the others were spending “unbelievable numbers of hours at the console, staring at those soupy LIFE things” and monopolizing the single 340 terminal. Worst of all, he considered the program they were using “clearly nonoptimal.” This was something the LIFE hackers readily admitted, but the LIFE case was the rare instance of hackers tolerating some inefficiency. They were so thrilled at the unfolding display of LIFE that they did not want to pause even for the few days it might take to hack up a better program. Greenblatt howled in protest—“the heat level got to be moderately high,” he later admitted—and did not shut up until one of the LIFE hackers wrote a faster program, loaded with utilities that enabled you to go backward and forward for a specified number of generations, focus in on various parts of the screen, and do all sorts of other things to enhance exploration.

  Greenblatt never got the idea. But to Gosper, LIFE was much more than your normal hack. He saw it as a way to “basically do science in a new universe where all the smart guys haven’t already nixed you out two or three hundred years ago. It’s your life story if you’re a mathematician: every time you discover something neat, you discover that Gauss or Newton knew it in his crib. With LIFE you’re the first guy there, and there’s always fun stuff going on. You can do everything from recursive function theory to animal husbandry. There’s a community of people who are sharing these experiences with you. And there’s the sense of connection between you and the environment. The idea of where’s the boundary of a computer. Where does the computer leave off and the environment begin?”

  Obviously, Gosper was hacking LIFE with near-religious intensity. The metaphors implicit in the simulation—of populations, generations, birth, death, survival—were becoming real to him. He began to wonder what the consequences would be if a giant supercomputer were dedicated to LIFE . . . and imagined that eventually some improbable objects might be created from the pattern. The most persistent among them would survive against odds which Gosper, as a mathematician, knew were almost impossible. It would not be randomness which determined survival, but some sort of computer Darwinism. In this game that is a struggle against decay and oblivion, the survivors would be the “maximally persistent states of matter.” Gosper thought that these LIFE forms would have contrived to exist—they would actually have evolved into intelligent entities.

  “Just as rocks wear down in a few billion years, but DNA hangs in there,” he’d later explain. “This intelligent behavior would be just another one of those organizational phenomena like DNA which contrived to increase the probability of survival of some entity. So one tends to suspect, if one’s not a creationist, that very very large LIFE configurations would eventually exhibit intelligent [characteristics]. Speculating what these things could know or could find out is very intriguing . . . and perhaps has implications for our own existence.”

  Gosper was further stimulated by Ed Fredkin’s theory that it is impossible to tell if the universe isn’t a computer simulation, perhaps being run by some hacker in another dimension. Gosper came to speculate that in his imaginary ultimate LIFE machine, the intelligent entities which would form over billions of generations might also engage in those very same speculations. According to the way we understand our own physics, it is impossible to make a perfectly reliable computer. So when an inevitable bug occurred in that super-duper LIFE machine, the intelligent entities in the simulation would have suddenly been presented with a window to the metaphysics which determined their own existence. They would have a clue to how they were really implemented. In that case, Fredkin conjectured, the entities might accurately conclude that they were part of a giant simulation and might want to pray to their implementors by arranging themselves in recognizable patterns, asking in readable code for the implementors to give clues as to what they’re like. Gosper recalls “being offended by that notion, completely unable to wrap my head around it for days, before I accepted it.”

  He accepted it.

  Maybe it is not so surprising. In one sense, that far-flung conjecture was already reality. What were the hackers but gods of information, moving bits of knowledge around in cosmically complex patterns within the PDP-6? What satisfied them more than this power? If one concedes that power corrupts, then one might identify corruption in the hackers’ failure to distribute this power—and the hacker dream itself—beyond the boundaries of the lab. That power was reserved for the winners, an inner circle that might live by the Hacker Ethic but made little attempt to widen the circle beyond those like themselves, driven by curiosity, genius, and the Hands-On Imperative.

  Not long after his immersion in LIFE, Gosper himself got a glimpse of the limits of the tight circle the hackers had drawn. It happened in the man-made daylight of the 1972 Apollo 17 moon shot. He was a passenger on a special cruise to the Caribbean, a “science cruise” timed for the launch, and the boat was loaded with sci-fi writers, futurists, scientists of varying stripes, cultural commentators, and, according to Gosper, “an unbelievable quantity of just completely empty-headed cruise-niks.”

  Gosper was there as part of Marvin Minsky’s party. He got to engage in discussion with the likes of Norman Mailer, Katherine Anne Porter, Isaac Asimov, and Carl Sagan, who impressed Gosper with his Ping-Pong playing. For real competition, Gosper snuck in some forbidden matches with the Indonesian crewmen, who were by far the best players on the boat.

  Apollo 17 was to be the first manned space shot initiated at night, and the cruise boat was sitting three miles off Cape Kennedy for an advantageous view of the launch. Gosper had heard all the arguments against going to the trouble of seeing a liftoff—why not watch it on television, since you’ll be miles away from the actual launching pad? But when he saw the damn thing actually lift off, he appreciated the distance. The night had been set ablaze, and the energy peak got to his very insides. His shirt slapped against his chest, the change in his pocket jingled, and the PA system speakers broke from their brackets on the viewing stand and dangled by their power cords. The rocket, which of course never could have held to so true a course without computers, leapt into the sky, hell-bent for the cosmos like some flaming avenger, a Spacewar nightmare; the cruise-niks were stunned into trances by the power and glory of the sight. The Indonesian crewmen went berserk. Gosper later recalled them running around in a panic and throwing their Ping-Pong equipment overboard, “like some kind of sacrifice.”

  The sight affected Gosper profoundly. Before that night, Gosper had disdained NASA’s human-wave approach toward things. He had been adamant in defending the AI lab’s more individualistic form of hacker elegance in programming, and in computing style in general. But now he saw how the real world, when it got its mind made up, could have an astounding effect. NASA had not applied the Hacker Ethic, yet it had done something the lab, for all its pioneering, never could have done. Gosper realized that the ninth-floor hackers were in some sense deluding themselves, working on machines of relatively little power compared to the computers of the future—yet still trying to do it all, change the world right there in the lab. And since the state of computing had not yet developed machines with the power to change the world at large—certainly nothing to make your chest rumble as did the NASA operation—all that the hackers wound up doing was making Tools to Make Tools. It was embarrassing.

  Gosper’s revelation led him to believe that the hackers could change things—just make the computers bigger, more powerful, without skimping on expense. But the problem went even deeper than that. While the mastery of the hackers had indeed made computer programming a spiritual pursuit, a magical art, and while the culture of the lab was developed to the point of a technological Walden Pond, something was essentially lacking.

  The world.

  As much as the hackers tried to make their own world on the ninth floor, it could not be done. The movement of key people was inevitable. And the harsh realities of funding hit Tech Square in the seventies: ARPA, adhering to the strict new Mansfield Amendment passed by Congress, had to ask for specific justification for many computer projects. The unlimited funds for basic research were drying up; ARPA was pushing some pet projects like speech recognition (which would have directly increased the government’s ability to mass-monitor phone conversations abroad and at home). Minsky thought the policy was a “losing” one, and distanced the AI lab from it. But there was no longer enough money to hire anyone who showed exceptional talent for hacking. And slowly, as MIT itself became more ensconced in training students for conventional computer studies, the Institute’s attitude shifted somewhat. The AI lab began to look for teachers as well as researchers, and the hackers were seldom interested in the bureaucratic hassles, social demands, and lack of hands-on machine time that came with teaching courses.

  Greenblatt was still hacking away, as was Knight, and a few newer hackers were proving themselves masters at systems work . . . but others were leaving, or gone. Now, Bill Gosper headed West. He arranged to stay on the AI lab payroll, hacking on the ninth-floor PDP-6 via the ARPAnet, but he moved to California to study the art of computer programming with Professor Donald Knuth at Stanford. He became a fixture at Louie’s, the best Chinese restaurant in Palo Alto, but was missing in action at Tech Square. He was a mercurial presence on computer terminals there but no longer a physical center of attention, draped over a chair, whispering, “Look at that,” while the 340 terminal pulsed insanely with new forms of LIFE. He was in California, and he had bought a car.

  With all these changes, some of the hackers sensed that an era was ending. “Before [in the sixties], the attitude was, ‘Here’s these new machines, let’s see what they can do,’” hacker Mike Beeler later recalled. “So we did robot arms, we parsed language, we did Spacewar . . . now we had to justify according to national goals. And [people pointed out that] some things we did were curious, but not relevant . . . we realized we’d had a Utopian situation: all this fascinating culture. There was a certain amount of isolation and lack of dissemination, spreading the word. I worried that it was all going to be lost.”

  It would not be lost. Because there was a second wave of hackers, a type of hacker who not only lived by the Hacker Ethic but saw a need to spread that gospel as widely as possible. The natural way to do this was through the power of the computer, and the time to do it was now. The computers to do it would have to be small and cheap—making the DEC minicomputers look like IBM Hulking Giants by comparison. But small and powerful computers in great numbers could truly change the world. There were people who had these visions, and they were not the likes of Gosper or Greenblatt: they were a different type of hacker, a second generation, more interested in the proliferation of computers than in hacking mystical AI applications. This second generation was hardware hackers, and the magic they would make in California would build on the cultural foundation set by the MIT hackers to spread the hacker dream throughout the land.

  Part II. Hardware Hackers: Northern California: The Seventies

  Chapter 8. Revolt in 2100

  The first public terminal of the Community Memory project was an ugly machine in a cluttered foyer on the second floor of a beat-up building in the spaciest town in the United States of America: Berkeley, California. It was inevitable that computers would come to “the people” in Berkeley. Everything else did, from gourmet food to local government. And if, in August 1973, computers were generally regarded as inhuman, unyielding, warmongering, and nonorganic, the imposition of a terminal connected to one of those Orwellian monsters in a normally good-vibes zone like the foyer outside Leopold’s Records on Durant Avenue was not necessarily a threat to anyone’s well-being. It was yet another kind of flow to go with.

  Outrageous, in a sense. Sort of a squashed piano, the height of a Fender Rhodes, with a typewriter keyboard instead of a musical one. The keyboard was protected by a cardboard box casing with a plate of glass set in its front. To touch the keys, you had to stick your hands through little holes, as if you were offering yourself for imprisonment in an electronic stockade. But the people standing by the terminal were familiar Berkeley types, with long stringy hair, jeans, T-shirts, and a demented gleam in their eyes that you would mistake for a drug reaction if you did not know them well. Those who did know them well realized that the group was high on technology. They were getting off like they had never gotten off before, dealing the hacker dream as if it were the most potent strain of sinsemilla in the Bay Area.

  The name of the group was Community Memory, and according to a handout they distributed, the terminal was “a communication system which allows people to make contact with each other on the basis of mutually expressed interests, without having to cede judgment to third parties.” The idea was to speed the flow of information in a decentralized, nonbureaucratic system. An idea born from computers, an idea executable only by computers, in this case a time-shared XDS-940 mainframe machine in the basement of a warehouse in San Francisco. By opening a hands-on computer facility to let people reach each other, a living metaphor would be created, a testament to the way computer technology could be used as guerrilla warfare for people against bureaucracies.

  Ironically, the second-floor public area outside Leopold’s, the hippest record store in the East Bay, was also the home of the musicians’ bulletin board, a wall completely plastered with notices of vegetarian singers looking for gigs, jug bands seeking Dobro players, flutists into Jethro Tull seeking songwriters with similar fixations. The old style of matchmaking. Community Memory encouraged the new. You could place your notice in the computer and wait to be instantly and precisely accessed by the person who needed it most. But it did not take Berkeley-ites long to find other uses for the terminal:

  FIND 1984, YOU SAY

  HEH, HEH, HEH . . . JUST STICK AROUND ANOTHER

  TEN YEARS

  LISTEN TO ALVIN LEE

  PART YOUR HAIR DIFFERENT

  DROP ASPIRIN

  MAKE A JOINT EFFORT

  DRIFT AWAY

  KEEP A CLEAN NOSE

  HOME (ON THE RANGE)

  QUIT KICKING YORE HEARTS

  SEE ME FEEL ME

  U.S. GET OUT OF WASHINGTON

  FREE THE INDIANAPOLIS 500

  GET UP AND GET AWAY

  FALL BY THE WAYSIDE

  FLIP OUT

  STRAIGHTEN UP

  LET A SMILE BE YOUR UMBRELLA

  . . . AND . . .

  BEFORE YOU KNOW IT

  1984

  WILL

  FIND

  YOU!

  AND ITS GO’ BE RIGHTEOUS . . .

  KEYWORDS: 1984 BENWAY TLALCLATLAN INTERZONE

  2-20-74

  It was an explosion, a revolution, a body blow against the establishment, spearheaded by one demented User—userism, come to the people—who called himself Doctor Benway in tribute to a sadistically perverted character in Burroughs’ Naked Lunch. This cat Benway was taking things further than even the computer radicals at Community Memory had suspected they would go, and the computer radicals were delighted.

  None was happier than Lee Felsenstein. He was one of the founders of Community Memory, and though he was not necessarily its most influential member, he was symbolic of the movement which was taking the Hacker Ethic to the streets. In the next decade, Lee Felsenstein was to promote a version of the hacker dream that would, had they known, appall Greenblatt and the Tech Square AI workers with its technological naiveté, political foundation, and willingness to spread the computer gospel through, of all things, the marketplace. But Lee Felsenstein felt he owed nothing to that first generation of hackers. He was a new breed, a scrappy, populist hardware hacker. His goal was to break computers out of the protected AI towers, up from the depths of the dungeons of corporate accounting departments, and let people discover themselves by the Hands-On Imperative. He would be joined in this struggle by others who simply hacked hardware, not for any political purpose but out of sheer delight in the activity for its own sake; these people would develop the machines and accessories through which the practice of computing would become so widespread that the very concept of it would change—it would be easier for everyone to feel the magic. Lee Felsenstein would come as close as anyone to being a field general to these rabidly anarchistic troops; but now, as a member of Community Memory, he was part of a collective effort to take the first few steps in a momentous battle that the MIT hackers had never considered worth fighting: to spread the Hacker Ethic by bringing computers to the people.

  It was Lee Felsenstein’s vision of the hacker dream, and he felt he had paid his dues in acquiring it.

  • • • • • • • •

  Lee Felsenstein’s boyhood might well have qualified him for a position among the hacker elite on the ninth floor of Tech Square. It was the same fixation with electronics, something that took hold so eerily that it defied rational explanation. Lee Felsenstein, though, would later try to give his love for electronics a rational explanation. In his reconstructions of his early years (reconstructions shaped by years of therapy), he would attribute his fascination with technology to a complex amalgam of psychological, emotional, and survival impulses—as well as the plain old Hands-On Imperative. And his peculiar circumstances guaranteed that he would become a different stripe of hacker than Kotok, Silver, Gosper, or Greenblatt.

 
