Smart Mobs

by Howard Rheingold


  The “cornucopia of the commons” is a consequence of Reed’s Law taking advantage of Moore’s Law. My journey into the universe of peer-to-peer ad-hocracies that combine the powers of computation with the growth capabilities of online social networks started innocently enough, when I stumbled onto a plot to find life in outer space.

  3

  Computation Nations and Swarm Supercomputers

  Peer-to-peer networks are composed of personal computers tied together with consumer Internet connections, each node a quantum zone of uncertainty, prone to going offline whenever its owner closes his laptop and chucks it into a shoulder-bag. . . . Peer-to-peer networks aren’t owned by any central authority, nor can they be controlled, killed, or broken by a central authority. Companies and concerns may program and release software for peer-to-peer networking, but the networks that emerge are owned by everyone and no one.

  They’re faery infrastructure, networks whose maps form weird n-dimensional topologies of surpassing beauty and chaos; mad technological hairballs run by ad-hocracies whose members each act in their own best interests.

  In a nutshell, peer-to-peer technology is goddamn wicked. It’s esoteric. It’s unstoppable. It’s way, way cool.

  —Cory Doctorow, “The Gnomes of San Jose”

  ETs, Worms, and ’Zillas

  I stumbled into my first peer-to-peer ad-hocracy when I visited a friend’s office in San Francisco one night in 1999. It was a quarter past midnight during the peak of the dotcom era, which meant that the crew was going full blast at the witching hour. Nevertheless, I couldn’t help noticing that the screens on the few unoccupied desks in the block-square geek farm seemed to be talking to each other. Animated graphical displays danced in bright colors on dozens of monitors.

  When he noticed what I was noticing, my friend explained that the computers were banding together. When nobody was using them, the PCs were swarming with other computers around the world in an amateur cooperative venture known as SETI@home—a collective supercomputer spread all over the Net.

  “What are they computing?” I asked.

  “They’re searching for extraterrestrial communications,” he replied. He wasn’t kidding.

  Community computation, also known as “distributed processing” or “peer-to-peer (p2p)” computing, had already been underway for years before Napster evoked the wrath of the recording industry with this new way of using networked computers. Whereas Napster enabled people to trade music by sharing their computer memory—their disk space—distributed processing communities share central processing unit (CPU) computation cycles, the fundamental unit of computing power. Sharing disk space does no more than enable people to pool and exchange data, whether it is in the form of music or signals from radio telescopes. CPU cycles, unlike disk space, have the power to compute—which translates into the power to analyze, simulate, calculate, search, sift, recognize, render, predict, communicate, and control. By the spring of 2000, millions of people participating in SETI@home were contributing their PCs’ processors to crunch radio astronomy data.1 They did it voluntarily, because finding life in outer space would be “way, way cool.” And perhaps because cooperating on that scale is a thrill. The thrill made even more sense when I learned that all the computers in this office were part of a team, competing and cooperating with other geek farms around the world to contribute computations to the group effort.

  Keep one thing in mind as we travel through the p2p universe: A great deal of peer-to-peer technology was created for fun—the same reason the PC and the Web first emerged from communities of amateur enthusiasts. When the suits, the bucks, and the corporations move in, the noncommercial and cooperative origins of technologies tend to be forgotten. Yet, venture capitalists would never have paid attention to the Web in the first place if a million people had not created Web pages because it was a cool thing to do (i.e., the creators would gain prestige among their peers) and because a little bit of cooperation can create resources useful to everyone. It’s the same old hacker intoxication of getting a buzz from giving tools away and then coming back to find that someone else has made the tool even more useful.

  The power of peer-to-peer methodology is a human social power, not a mechanical one, rooted in the kind of passion that enthusiasts like Cory Doctorow demonstrate when he says: “In a nutshell, peer-to-peer technology is goddamn wicked. It’s esoteric. It’s unstoppable. It’s way, way cool.” Although Doctorow hadn’t been born when system administrators started receiving tapes in the mail, labeled “Love, Ken,” he was expressing the same spirit that drove Unix and the creation of the Internet and the Web. People don’t just participate in p2p—they believe in it. Hardware and software make it possible, but peer-to-peer technology is potent because it grows from the collective actions of large numbers of people. Like Cory, some people grow passionate about this kind of technology-assisted cooperation. The people who created the Web, and before that, the Internet and the PC, knew that passion. It’s what author Robert Wright calls “nonzero-sumness”—the unique human power and pleasure that comes from doing something that enriches everyone, a game where nobody has to lose for everyone to win.2

  Today, millions of people and their PCs are not just looking for messages from outer space and trading music but tackling cancer research, finding prime numbers, rendering films, forecasting weather, designing synthetic drugs by running simulations on billions of possible molecules—taking on computing problems so massive that scientists have not heretofore considered them.

  Distributed processing takes advantage of a huge and long-overlooked source of power.3 It is a kind of technical windfall. In a sense it’s found energy, analogous to the energy savings that come from building more efficient appliances and better insulated buildings. Computation power can be multiplied, without building any new computers, simply by harvesting a resource that until now had been squandered—the differential between human and electronic processing speeds.

  If you type two characters per second on your keyboard, you’re using a minuscule fraction of your machine’s power. During that second, most desktop computers can simultaneously perform hundreds of millions of additional operations. Time-sharing computers of the 1960s exploited this ability. Now, millions of PCs around the world, each one of them thousands of times more powerful than the time-sharing mainframes of the ’60s, connect via the Internet. As the individual computers participating in online swarms become more numerous and powerful and the speed of information transfer among them increases, an expansion of raw computing power looms, an expansion of such magnitude that it will certainly make possible qualitative changes in the way people use computers.
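  To put rough numbers on that differential, the back-of-the-envelope calculation below (in Python) compares an assumed typing workload against an assumed processor speed; both operation counts are illustrative guesses, not figures from the text.

    # Back-of-the-envelope illustration of the idle-cycle differential.
    # The operation counts are assumed, illustrative values.
    ops_per_second = 500_000_000   # a turn-of-the-century desktop CPU, very roughly
    ops_per_keystroke = 50_000     # generous allowance for handling one key event
    keystrokes_per_second = 2      # the typing rate used in the text

    busy = keystrokes_per_second * ops_per_keystroke
    idle_fraction = 1 - busy / ops_per_second
    print(f"fraction of the CPU left idle while typing: {idle_fraction:.4%}")

  Even with these generous allowances, well over 99.9 percent of the machine’s cycles go unused.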

  Peer-to-peer sociotechnical cooperatives amplify the power of the other parts of the smart mobs puzzle. Peer-to-peer collectives, pervasive computing, social networks, and mobile communications multiply each other’s effects: Not only are millions of people now linking their social networks through mobile communication devices, but the computing chips inside those mobile devices are growing capable of communicating with radio-linked chips embedded in the environment. Expect startling social effects when the 1,500 people who walk across Shibuya Crossing at every light change can become a temporary cloud of distributed computing power.

  In the summer of 2000, I visited David P. Anderson, technical instigator of the Search for Extraterrestrial Intelligence (SETI) project. I knew I had arrived at the right place when I spotted the WELCOME ALL SPECIES doormat. The University of California Space Sciences Laboratory in the Berkeley Hills is still the mother ship of community computation, nerve center of the largest cooperative computing effort in the world.

  Search for Extraterrestrial Intelligence (SETI) is a privately funded scientific examination of extraterrestrial radio signals in search of messages from alien civilizations. More than 2 million people worldwide donate untapped CPU time on their PCs to analyze signals collected by a radio telescope in Puerto Rico. The telescope pulls down about 50 billion bytes of data per day, far more than SETI’s servers can analyze. That’s where community computing comes in. SETI@home participants install client software (a program they download from the Net and run on their home computer; the client communicates automatically with the central “server” computer in Berkeley). The client software downloads a small segment of radio telescope signals and processes it, looking for interesting patterns consistent with intelligent life. When the task is complete, the program uploads the results to SETI@home headquarters and collects a new chunk of digitized space signal to search. When the computer’s user logs into the machine, the SETI@home client goes dormant, awakening again when the human user pauses for more than a few minutes.
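  The work cycle just described (download a chunk, crunch it while the machine is idle, upload the result, repeat) can be sketched in a few lines of Python. Everything below is a simplified stand-in: the work-unit format, the “analysis,” and the idleness check are placeholders, not SETI@home’s actual client protocol.

    import random
    import time

    def fetch_work_unit():
        """Pretend to download a chunk of digitized telescope signal (random samples here)."""
        return [random.gauss(0.0, 1.0) for _ in range(1024)]

    def analyze(samples):
        """Toy analysis: report the strongest 'spike' in the chunk."""
        return max(abs(s) for s in samples)

    def upload_result(result):
        """Pretend to return the result to the project's server."""
        print(f"reporting peak value {result:.2f}, requesting a new work unit")

    def user_is_active():
        """Stand-in for the real idleness check (keyboard/mouse timers, screensaver)."""
        return False

    def client_loop(iterations=3):
        for _ in range(iterations):
            if user_is_active():
                time.sleep(60)          # go dormant while a human is using the machine
                continue
            work = fetch_work_unit()    # download a small segment of signal
            peak = analyze(work)        # process it during idle cycles
            upload_result(peak)         # upload the result and collect a new chunk

    client_loop()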

  It was a sunny day, so Anderson and I sat on a terrace outside the Space Sciences Laboratory. The California hills had turned summer tawny. We could smell the eucalyptus forest on the hills below us. Behind Anderson, I could see a panoramic view of San Francisco Bay. If I worked in this building, I would take as many meetings as possible on the terrace. Anderson, tall, dark-haired, with the lank and sinew of a long-distance runner, takes his time thinking about a response and then tends to speak in perfectly formed paragraphs.

  I asked him how SETI@home started. “In 1995,” Anderson recalled, “I was contacted by a former Berkeley grad student named David Gedye. Inspired by documentaries about the Apollo moon landing, an event that made people all over the world feel that human beings were taking a collective step forward, Gedye wondered what contemporary project today might have a similar impact and hit upon the idea of harnessing the public’s fascination with both the Internet and the SETI program.”

  In mid-1999, SETI@home clients were made available online for free downloading. “It’s been a wild ride since then,” says Anderson. “We were hoping for at least 100,000 people worldwide to get enough computer power to make the thing worthwhile. After a week, we had 200,000 participants, after four or five months it broke through a million, and now it’s past 2 million.”4

  Although SETI@home put distributed computing on the map, it wasn’t the first such attempt to link computers into a cooperating network. In the early 1980s, I searched for the future in the library of the Xerox Palo Alto Research Center. Some of the most interesting reading was in the distinctive blue-and-white bound documents of PARC research reports. I wasn’t technically knowledgeable enough to understand most of them, but one of them, written in largely nontechnical English, had an intriguing title, “Notes on the ‘Worm’ Programs—Some Early Experience with a Distributed Computation,” by John F. Shoch and Jon A. Hupp.5 The report was about experiments with a computer program that traveled from machine to machine on a local network, looking for idle CPUs, sneaking in computations when the processor was not in use, and then retreating to the mother ship with the results when humans started using the machines.

  I was intrigued by the authors’ acknowledgment that they were inspired by a 1975 science fiction novel: “In his book The Shockwave Rider, John Brunner developed the notion of an omnipotent ‘tapeworm’ program running loose through a network of computers—an idea which may seem rather disturbing, but which is also quite beyond our current capabilities. Yet the basic model is a very provocative one: a program or computation that can move from machine to machine, commandeering resources as needed, and replicating itself when necessary.”6

  It took decades for the telecommunication pipelines that linked computers to become fast enough, and for the computer processors to become powerful enough, to enable truly useful distributed computation power. In 1985, Miron Livny proposed that idle workstations could be used for distributed work.7 A few years later, Richard Crandall, now Distinguished Scientist at Apple, started testing gargantuan prime numbers with networked NeXT computers.

  “One day at NeXT engineering headquarters,” Crandall recalled when I talked with him in 2000, “I looked at these idle computers, and it occurred to me that machines have no business sleeping. I installed software that allowed the computers to perform computations when machines were idle and to combine their efforts across the network. I called it Godzilla. But we got a legal warning from the company that owned the rights to the name Godzilla. So we renamed it ’Zilla.”8

  Crandall wanted to work on a specific task: searching for very large prime numbers. Crandall and two colleagues completed the deepest computation ever performed in order to answer a yes-or-no question: Is the 24th Fermat number (which has more than 5 million digits) prime?9 “It took 100 quadrillion machine operations,” Crandall proudly estimates. “That’s approximately the same amount of computation Pixar required to render their computer-animated feature film A Bug’s Life. With that level of computational effort you can create a full-length movie or get a yes or no answer about an interesting number.” Number theory, he asserted, has a history of surfacing ideas that are interesting only to contemporary mathematicians but then turn out to be essential to some practical problem a few centuries later. I later discovered that Crandall’s interest in prime numbers had led to his patent for an algorithm that Apple uses for encryption.10
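  The yes-or-no question itself has a compact classical answer: Pépin’s test, which declares the Fermat number F_n prime exactly when 3 raised to the power (F_n − 1)/2 is congruent to −1 modulo F_n. The Python sketch below applies it with ordinary big-integer arithmetic, which is practical only for small Fermat numbers; whether this is precisely the routine Crandall’s team ran is not stated here, and at five million digits the real computation required far more sophisticated FFT-based multiplication.

    def fermat_number(n):
        """F_n = 2**(2**n) + 1."""
        return 2 ** (2 ** n) + 1

    def pepin_test(n):
        """Pepin's test: for n >= 1, F_n is prime iff 3**((F_n - 1)//2) == -1 (mod F_n)."""
        F = fermat_number(n)
        return pow(3, (F - 1) // 2, F) == F - 1

    for n in range(1, 8):
        # F_1 through F_4 are prime; every Fermat number tested beyond them is composite.
        print(n, pepin_test(n))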

  One classic example of a computationally intense problem is computer weather simulation. In addition to being technically difficult, weather simulation has become an important tool in the highly charged political debate surrounding global warming and other human-initiated kinds of climate change. Myles R. Allen of the Rutherford Appleton Laboratory in Chilton, England, proposed applying distributed computation to climate simulation.11 Allen appealed to a sense of civic spirit among those who read his Web site: “This experiment would introduce an entirely new form of climate prediction: a fuzzy prediction, reflecting the range of risks and probabilities, rather than a single ‘best guess’ forecast. And we don’t have the computing resources to do this any other way. So, if you’re lucky enough to have a powerful PC on your desk or at home, we’re asking you to do your bit so the right decisions get taken on climate change.” Allen received 15,000 replies within two weeks.

  On their Web site, Allen and colleagues explain their objectives and methodology:

  Predictions of climate change are made using complex computer models of the ocean and atmosphere of the Earth. Uncertainties arise in these predictions because of the interactions between physical processes occurring on many different scales (from the molecular to the planetary). The only systematic way to estimate future climate change is to run hundreds of thousands of state-of-the-art climate models with slightly different physics in order to represent uncertainties. This technique, known as ensemble forecasting, requires an enormous amount of computing power, far beyond the currently available resources of cutting-edge supercomputers. The only practical solution is to appeal to distributed computing which combines the power of thousands of ordinary PCs, each PC tackling one small but key part of the global problem!12
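  In code, the ensemble idea reduces to generating many slightly perturbed parameter sets and treating each model run as an independent work unit. The sketch below uses a toy stand-in for the climate model and invented parameter ranges; it is meant only to show the shape of perturbed-physics ensemble forecasting, not the project’s actual model or parameters.

    import random

    def toy_climate_model(sensitivity, feedback, years=50):
        """Toy stand-in for a climate model: two 'physics' knobs in, one warming number out."""
        return sensitivity * (1 + feedback) * years / 100.0

    def make_work_units(n_members, seed=0):
        """Perturb the model physics to create n independent ensemble members."""
        rng = random.Random(seed)
        return [(rng.uniform(1.5, 4.5), rng.uniform(-0.2, 0.2)) for _ in range(n_members)]

    def run_ensemble(work_units):
        """Each member could run on a different volunteer PC; here we just loop locally."""
        return sorted(toy_climate_model(s, f) for s, f in work_units)

    results = run_ensemble(make_work_units(1000))
    # A "fuzzy" prediction: a spread of outcomes rather than a single best guess.
    print(f"5th-95th percentile warming: {results[50]:.2f} to {results[950]:.2f} (arbitrary units)")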

  Another category of difficult problem has direct appeal to people who don’t care about giant prime numbers or life in outer space but would highly appreciate a new medicine. Creating new synthetic medicines for a spectrum of diseases, including AIDS and cancer, requires three-dimensional modeling of the ways complex molecules fit or fold together. With very large numbers of possible molecules to simulate, multiplied by very large numbers of ways they can assume shapes, sifting the possible molecules for promising pharmaceuticals has been prohibitively slow. A variety of voluntary and for-profit distributed computation enterprises are addressing the computational needs of “rational drug design.”

  SETI@home instigator David Anderson became Chief Technology Officer of a for-profit enterprise, United Devices, which offers incentives such as frequent flier miles and sweepstakes prizes to individuals who become members and supply CPU cycles to corporations and research facilities.13 Chip maker Intel sponsors a “philanthropic peer-to-peer” program. United Devices, together with the National Foundation for Cancer Research and the University of Oxford, enables participants to contribute their CPU cycles to drug optimization computations involved in evaluating potential leukemia medicines from Oxford’s database of 250 million candidate molecules.14 Whereas Intel’s first supercomputer, built in the 1990s for Sandia National Laboratory at a cost of $40 to $50 million, is capable of one teraflop (one trillion floating point operations), the United Devices virtual supercomputer is aiming for fifty teraflops “at almost no cost.”15 In 2002, with the help of 1.35 million PC users who had joined the United Devices effort, an Oxford University team searched through 3.5 million potential anthrax-treating compounds and came up with 300,000 possible new drugs. “We managed to search the complete dataset in just four weeks instead of years,” one of the researchers noted. “Having that big set to start with means we’ve come up with drug compounds that the pharmaceutical companies would never have thought of.”16

  As of 2002, a rainbow of distributed computation efforts were underway. An incomplete list includes the following:

  Entropia (http://www.entropia.com), a commercial enterprise like United Devices, provides computing cycles for life sciences research and more mundane applications such as financial and accounting calculations.

  Folderol (http://www.folderol.com) uses human genome data and volunteers to put medically crucial protein-folding computations in the public domain.

  Distributed.net (http://www.distributed.net), according to instigator David McNett, started out as “a loose coalition of geeks that came together in 1997 to crack one of RSA corporation’s encryption techniques.” This virtual supercomputer has succeeded in solving cryptographic challenges—an important part of determining whether e-commerce schemes are sound—and has become a linchpin in the provision of personal privacy and national security.
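  The encryption challenges distributed.net attacked are, at bottom, exhaustive key searches, and the coordination problem is simply carving an enormous keyspace into blocks that volunteer machines test independently. The Python sketch below shows that partitioning with a toy XOR “cipher” and a deliberately tiny 16-bit keyspace; it illustrates the idea only and is not a real cipher or distributed.net’s actual block protocol.

    def toy_encrypt(plaintext: bytes, key: int) -> bytes:
        """Toy 16-bit XOR 'cipher' -- a deliberately weak stand-in, not a real algorithm."""
        return bytes(b ^ ((key >> (8 * (i % 2))) & 0xFF) for i, b in enumerate(plaintext))

    def make_blocks(keyspace_bits, block_size):
        """Carve the whole keyspace into contiguous blocks for distribution to clients."""
        top = 1 << keyspace_bits
        return [(lo, min(lo + block_size, top)) for lo in range(0, top, block_size)]

    def search_block(ciphertext, known_plaintext, start, end):
        """One work unit: try every key in [start, end) and report a match, if any."""
        for key in range(start, end):
            if toy_encrypt(known_plaintext, key) == ciphertext:
                return key
        return None

    secret_key = 0xBEEF
    plaintext = b"squeamish ossifrage"
    ciphertext = toy_encrypt(plaintext, secret_key)
    for lo, hi in make_blocks(keyspace_bits=16, block_size=1 << 12):
        found = search_block(ciphertext, plaintext, lo, hi)
        if found is not None:
            print(f"key recovered in block {lo:#06x}-{hi:#06x}: {found:#06x}")
            break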

 
