Smart Mobs

by Howard Rheingold


  Reciprocity, cooperation, reputation, social grooming, and social dilemmas all appear to be fundamental pieces of the smart mob puzzle. Each of these biological and social phenomena can be affected by, and can affect, communication behaviors and practices. Prisoner’s Dilemma and game theory are not “answers” to questions of cooperation; rather, they are tools for understanding human social dynamics. Together with CPR theory, game-theoretic and other computer-modeling approaches open windows onto the kinds of group behavior that might emerge with smart mob technologies.

  Inventing the Innovation Commons

  The most successful recent example of an artificial public good is the Internet. Microprocessors and communication networks were only the physical part of the Net’s success formula; cooperative social contracts were also built into the Net’s basic architecture. The Internet is both the result of and the enabling infrastructure for new ways of organizing collective action via communication technology. This new social contract enables the creation and maintenance of public goods, a commons for knowledge resources.

  The personal computer and the Internet would not exist as they do today without extraordinary collaborative enterprises in which acts of cooperation were as essential as microprocessors. The technologies that support tomorrow’s smart mobs were created over three decades by people who competed with each other to improve the value of their shared tools, media, and communities of practice. And for most of this era, “value” translated into “usefulness,” not price per share of stock. A brief detour into the history of personal computing and networking illuminates more than the origins of smart mob technologies; the commons that fostered technical innovations is also the fundamental social technology of smart mobs. It all started with the original hackers in the early 1960s.

  Before the word “hacker” was misappropriated to describe people who break into computer systems, the term was coined to describe people who create computer systems. The first people to call themselves hackers were loyal to an informal social contract called “the hacker ethic.” As Steven Levy described it, this ethic included these principles:

  Access to computers should be unlimited and total.

  Always yield to the Hands-On Imperative.

  All information should be free.

  Mistrust authority—promote decentralization.45

  Without that ethic, there probably wouldn’t have been an Internet to commercialize. Keep in mind that although many of the characters involved in this little-known but important history were motivated by altruistic concerns, their collaboration was aimed at creating a resource that would benefit all—starting with the collaborators who created it. Like other creators of public goods, the hackers created something that they were eager to use for their own purposes.

  The Internet was deliberately designed by hackers to be an innovation commons, a laboratory for collaboratively creating better technologies. They knew that some community of hackers in the future would know more about networks than the original creators, so the designers of the Internet took care to avoid technical obstacles to future innovation.46 The creation of the Internet was a community enterprise, and the media that the original hackers created were meant to support communities of creators.47 To this end, several of the most essential software programs that make the Internet possible are not owned by any commercial enterprise—a hybrid of intellectual property and public good, invented by hackers.

  The foundations of the Internet were created by the community of creators as a gift to the community of users. In the 1960s, the community of users was the same as the community of creators, so self-interest and public goods were identical, but hackers foresaw a day when their tools would be used by a wider population.48 Understanding the hacker ethic and the way in which the Internet was built to function as a commons is essential to forecasting where tomorrow’s technologies of cooperation might come from and what might encourage or limit their use.

  Originally, software was included with the hardware that computer manufacturers sold to customers—mainframe computers attended by special operators. Programmers were required to submit their programs to the operators in the form of punched paper cards. When technology and political necessity made it possible for programmers to work directly with computers, an explosion of innovation occurred.

  Credit Sputnik for the way computers changed. In 1957, motivated by the groundbreaking entry of Soviet technology into orbit, the U.S. Department of Defense created the Advanced Research Projects Agency. ARPA hired an MIT professor by the name of J.C.R. Licklider to lead an effort to leapfrog over existing computer technology. ARPA contractors created software that would display the results of computations as graphical displays on screens instead of printouts. Most importantly, they created software “operating systems” that enabled the community of programmers/users to interact directly with computers.

  An operating system (OS) coordinates the interaction between a computer’s hardware and application software. Early interactive operating systems were known as “time-sharing” systems because they took advantage of the speed of electronic computation to divide the computer’s “attention” among groups of programmers. The processor would switch from one user to the next, devoting a fraction of a second to each, giving every user the impression that he or she was the sole user. Because they were connected to the same computer, programmers working on ARPA projects quickly developed a sense of community. They started inventing ways to send each other messages from their individual terminals through the shared computer. Email and virtual communities are both rooted in the ancestral “hacks” the time-sharing programmers created to communicate among themselves.
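  A toy model makes the trick concrete. The following sketch (illustrative Python, assuming nothing about the actual ARPA-era systems, which relied on hardware timer interrupts rather than anything like this) switches one “processor” among several users’ tasks, a slice at a time:

    # Toy round-robin time-sharing: one "processor" divides its
    # attention among several users, one small slice at a time.

    def user_task(name, steps):
        # Simulate one user's program as a series of small work steps.
        for step in range(steps):
            yield f"{name}: step {step}"

    def round_robin(tasks):
        # Give each task one slice, then move on, until all finish.
        queue = list(tasks)
        while queue:
            task = queue.pop(0)
            try:
                print(next(task))    # this user's fraction of a second
                queue.append(task)   # back of the line for another turn
            except StopIteration:
                pass                 # this user's program has finished

    round_robin([user_task("alice", 3), user_task("bob", 2)])

  Each “user” sees steady progress even though only one task runs at any instant; that illusion is what made a shared mainframe feel like a personal machine.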

  The bill for these innovations was paid by ARPA grants. The hackers created tools for one another, competing to share the best hacks with the community, giving American taxpayers and the rest of the world an astonishing return on investment. At MIT in the early 1960s, inventing interactive computing was a collective enterprise. Essential programs were stored on punched paper tape and kept in an unlocked drawer; any hacker could use the program, and if he found a better way to do what the program was intended to do, he would revise the program, change the tape, and put it back in the drawer.49

  In the late 1960s and early 1970s, several developments set off the next frenzy of innovation. Licklider and others started planning an “intergalactic network” to connect the geographically scattered ARPA computing centers.50 From the beginning, the network’s architects knew they were creating a communication medium as well as a means of connecting remote computers.51 By the mid-1970s, government laboratories and big corporations were joined by a new player in the computer game: teenage hobbyists. In 1974, the Altair, the first personal computer kit, became available, and “homebrew computing” hobbyists began meeting in Palo Alto. The Homebrew Computer Club received a famous letter in 1976 from twenty-one-year-old Bill Gates, complaining that homebrewers were using the programming tool that his new company, Microsoft, had created for the Altair without paying him for it.52 Software, Gates declared, was not a public good you kept in a drawer, tinkered with, and shared; it was private property. Bill Gates stuck by his declaration, and by the 1990s he had become the world’s richest man by selling the operating system used by 90 percent of the desktop computers in the world.

  In 1969, AT&T Bell Labs pulled out of ARPA’s Multics operating system project, and several Bell Labs programmers who missed the sense of community started working on their own unofficial OS project. Programmer Ken Thompson created a game on a small computer that had come into his hands, in the process writing a “kernel” that would end up growing into the OS that collaborator Brian Kernighan named Unix in 1970. The name was a pun on the abandoned Multics project.53 The Unix creators made their source code publicly available to other programmers and invited collaboration in creating software that could make Unix more useful, a decision that gave birth to a whole new way of developing software. Computer software is distributed for use in the form of “object code,” a translation of the original (“source”) program into a human-unreadable but machine-executable collection of zeroes and ones. By distributing the source code, the Unix creators made it possible for other programmers to understand how the software works and to make their own modifications—harking back to the days of the paper tape in the unlocked drawer. Ken Thompson started duplicating Unix source code and utilities on magnetic tapes, labeling and documenting them with the words “Love, Ken,” and mailing the tapes to friends.54
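  Python bytecode offers a rough modern analogy to the source/object distinction (a sketch only; real Unix object code was native machine instructions, not bytecode):

    import dis

    # Source code is readable text; what a machine runs is a translation.
    source = "def add(a, b):\n    return a + b\n"

    code = compile(source, "<example>", "exec")  # translate the source
    ns = {}
    exec(code, ns)                               # now ns["add"] exists

    print(ns["add"](2, 3))   # the translated form runs fine (prints 5)...
    dis.dis(ns["add"])       # ...but its instructions are meant for machines;
                             # without the source, study and change are hard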

  Unix software became the OS of the Net. In turn, the Internet created a rich environment for Unix programmers to establish one of the earliest global virtual communities. Dennis Ritchie, one of the Unix creators, wrote: “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”55

  However, in 1976, AT&T halted publication of Unix source code; the original books, eventually banned, became “possibly the most photocopied works in computing history.”56 At around the same time the Unix community was coalescing, MIT’s Artificial Intelligence research laboratory changed the kind of computers it used. This was a blow to the MIT hacker culture, because the hackers’ software tools were rendered useless. At the same time, many of the early AI researchers were leaving for private industry to get involved in the techno-bubble of the time, the commercial AI boom and eventual bust. One holdout at MIT, deprived of his beloved programming environment and resistant to commercialization by AT&T and Microsoft of what he considered public property, was Richard Stallman.

  Stallman vowed to write an OS that would be as portable and open as Unix but licensed in a way that would maintain its status as a public good. Stallman, founder of the Free Software Foundation, started creating GNU—a recursive acronym that stands for “GNU’s Not Unix.” Stallman, who owns little property and has no home other than his office, devoted himself thereafter to what he called “free software” (and emphasized that he meant “free as in free speech, not free beer”).57

  Stallman not only wrote the first source code for a free OS; he also hacked the legalities of the copyright system. He released the software he created under a license known as the GPL (General Public License). The GNU GPL enables others to copy, distribute, and make changes to software, as long as innovators don’t prevent others from doing the same thing. Stallman called the new kind of license “Copyleft.”58 Like the paper tape in a drawer at MIT, GPL software is free for anyone to use, and anyone is free to build on it, but only if they keep the source code of the software open for others to use and improve.
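  In practice, applying the GPL amounts to shipping the source with a license notice attached. A file released this way might begin something like the following (a hypothetical program, with wording that paraphrases rather than quotes the standard GNU GPL header):

    # hello.py -- an illustrative GPL-style notice on a made-up program.
    #
    # This program is free software: you can redistribute it and/or
    # modify it under the terms of the GNU General Public License as
    # published by the Free Software Foundation.
    #
    # It is distributed WITHOUT ANY WARRANTY. Anyone who passes this
    # program along, modified or not, must make the source available
    # under these same terms; that reciprocal condition is the
    # "copyleft."

    print("Hello, copyleft world!")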

  Creating an operating system is not a simple enterprise. By 1991, GNU was a nearly complete OS; it lacked only its most essential part, known as the kernel. Linus Torvalds, a student at the University of Helsinki, started to write his own kernel. Built on GNU tools, all of Torvalds’s code was open according to the GPL, and Torvalds took the fateful step of posting his work to the Net and asking others for help. The kernel, known as Linux, drew hundreds, then thousands, of young programmers. By the 1990s, opposition to Microsoft’s monolithic domination of the computer operating system market became a motivating factor for rebellious young programmers who had taken up the torch of the hacker ethic.

  “Open source” refers to software, but it also refers to a method for developing software and a philosophy of how to maintain a public good. Eric Raymond wrote about the difference between “cathedral and bazaar” approaches to complex software development:

  The most important feature of Linux, however, was not technical but sociological. Until the Linux development, everyone believed that any software as complex as an operating system had to be developed in a carefully coordinated way by a relatively small, tightly knit group of people. This model was and is typical of both commercial software and the great freeware cathedrals. . . . Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers.59

  Software deliberately created as a public good is the reason you can type www.smartmobs.com instead of a string of numbers to see this book’s Web site; the Internet’s “domain name” system depends on BIND software, probably the most widely used software that nobody owns but everybody uses.60 When it was time for the ARPAnet to grow into a network of networks, the programming wizards who created the Internet’s fundamental protocols understood that decisions they made about this software would affect future generations of innovators. They created the first protocols for sending data around the network in a way that had profound social effects: “The basic argument is that, as a first principle, certain required end-to-end functions can only be performed correctly by the end-systems themselves. . . . The network’s job is to transmit datagrams as efficiently and flexibly as possible. Everything else should be done at the fringes.”61 (Think of a “datagram” as a little chunk of content that has an address on it.)
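  Both ideas are visible in a few lines of ordinary socket code (a minimal sketch; the loopback address and port number are arbitrary placeholders):

    import socket

    # The domain name system translates a memorable name into the
    # numeric address the network actually routes on; the answer
    # typically comes back from BIND-style name servers.
    print(socket.gethostbyname("www.smartmobs.com"))

    # A datagram really is just a chunk of content with an address
    # on it. UDP sends exactly that; the network's only job is
    # delivery, and everything else happens at the endpoints.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello from the fringes", ("127.0.0.1", 9999))
    sock.close()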

  By adhering to one of the principles Ostrom had recognized—in complex social systems, the levels of governance should nest within each other—Internet architects hit upon the “end-to-end” principle that allows individual innovators, not the controllers of the network, to decide what to build on the Internet’s capabilities.62 When Tim Berners-Lee created World Wide Web software at a physics laboratory in Geneva, he didn’t have to get permission to change the way the Internet works, because the computers that are connected (the “fringes”), not a central network, are where the Internet changes. Berners-Lee simply wrote a program that worked with the Internet’s protocols and evangelized a group of colleagues to start creating Web sites; the Web spread by infection, not fiat.63

  In 1993, Marc Andreessen and other programmers at the U.S. National Center for Supercomputing Applications (NCSA) released Mosaic, the “browser” software that made the Web accessible through a point-and-click interface. Key Mosaic programmers moved from NCSA, a public institution that puts its software into the public domain, to Netscape, Inc., which “closed” the browser code. Marc Andreessen became a zillionaire when Netscape went public in 1995. As the Internet industry skyrocketed from nowhere to “the greatest legal accumulation of wealth in history,”64 the Web was also emerging as a noncommercial effort by programmers who had not been born when the ARPAnet was invented. Volunteers started exchanging software to improve the Web server that NCSA programmers had created. Just as the browser is the software used to navigate the Web, the Web server is the software used to publish information on the Web. These volunteer programmers agreed that keeping free, open-source Web server software available was key to maintaining the spirit of innovation.

  Brian Behlendorf cofounded the virtual community of volunteers who maintain the open-source software that still powers 60 percent of the Web servers in the world. Because the earliest noncommercial Web server software required many “patches”—additional software added to a program to fix a bug—Behlendorf organized an online coalition of programmers to share patches. Because it was a “patchy” program, they decided to call the software Apache. He’s now the CEO of CollabNet, one of the rare surviving dotcoms that uses open-source methods for commercial software development. In 1998, IBM based its e-business product line on Apache and subsequently announced a billion-dollar budget to support open-source software development.

  Perhaps the largest incubator of online social networks and the oldest global virtual community, Usenet, is also an example of a gigantic long-functioning anarchy—a public good that exists on minimum enforcement of cooperation. In 1979, Duke University grad students Jim Ellis and Tom Truscott, and Steve Bellovin at the University of North Carolina, created the first link between Duke and UNC.65 Unix-to-Unix copy protocol, a communication tool that came bundled with every copy of Unix, made it possible for computers to exchange files over telephone modem connections. Every day or hour, one computer would automatically dial the modem connected to another computer and exchange messages that had been composed by computer users at either end; each computer would relay messages that had been passed to it until they reached their destination, like a bucket brigade. This kind of public email, known as “postings” or “posts,” is readable by anyone who subscribes to the appropriate topical interest group known as a “newsgroup.” The self-organizing global conversation network began to spread among university and industry computer centers, relaying messages around the world through ad hoc dial-up arrangements.
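  The bucket brigade is simple enough to simulate. In this toy sketch (site names and topology invented for illustration), each site stores the posts it has seen and relays anything new whenever a scheduled call connects it to a neighbor:

    # Toy store-and-forward relay in the spirit of early Usenet:
    # no central hub, just sites copying news to their neighbors.

    links = {"duke": ["unc"], "unc": ["duke", "berkeley"], "berkeley": ["unc"]}
    seen = {site: set() for site in links}   # posts each site has stored

    def post(site, message):
        seen[site].add(message)

    def exchange_round():
        # One round of scheduled calls: each site relays its stored
        # posts onward, like passing buckets down a line.
        for site, neighbors in links.items():
            for neighbor in neighbors:
                seen[neighbor] |= seen[site]

    post("duke", "first message")
    exchange_round()
    exchange_round()          # after enough rounds, every site has every post
    print(seen["berkeley"])   # {'first message'}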

  To join Usenet, a computer system operator only needed to get a “feed” from another computer system that would transmit and relay messages to and from the system’s users. That single agreement to send messages back and forth in an agreed format is the extent of Usenet’s enforced cooperation. There is no central control, either technical or social. “Whatever order exists in the Usenet is the product of a delicate balance between individual freedom and collective good,” is how Marc Smith put it.66 This anarchy, now over twenty years old, became spectacularly successful after 1986, when the news feed began to propagate through Internet-linked sites with high-speed connections rather than ad hoc relay networks of dial-up connections. Usenet exchanged 151 million messages, contributed by 8.1 million unique identified users in 2000. Each day, more than 1 million messages are exchanged among more than 110,000 unique participants via 103,000 newsgroups.67

 
