Smart Mobs

by Howard Rheingold


  Computers make complicated tasks easier to handle. However, some kinds of tasks always remain too complicated for state-of-the-art computing technology. Doctorow and his partners soon realized that they could create software to do what they proposed, but it would be impractical to apply it to the entire Net. “We were going to have to buy servers bigger than the entire Internet,” is how Doctorow put it. That’s where Doctorow’s passion for collecting pop culture artifacts came in.

  Ever since I was really young, I’ve collected vintage Disney theme park crap. In Toronto you’d find one piece per year at a yard sale or a thrift store. Then I found Auctionweb, which is what eBay was called at the beginning, and I found dozens of items, then thousands. I started to build long, in-depth query strings, and eventually I ended up with a 20-kilobyte-long URL that I would paste into my browser at 5:00 A.M. Eastern time, the only time of night or day when their servers were idle enough to run my query. Half an hour later the host computer at Auctionweb shrank 5,000 listings to 50 that would truly interest me. Eventually I couldn’t even do that anymore, because there was no time when their servers were idle enough to run my monster query.

  I was ready to give up when I hit upon a better strategy. I started keeping a record of every person who had ever bid against me in the past and then found out what they were bidding on now. Then I would look at who was bidding against the people who were bidding against me and examine what they were bidding on now. Not only did that strategy turn out to be a great means of finding vintage Disney theme park stuff, as you can see by looking around my apartment, but it was also an amazing means of discovering stuff that I didn’t know I was looking for! I would bid on the little silver badge from the conductor’s hat on the Disney railroad, which went up to $300, and there was no way I was going to buy it for that much, but it hooked me into the bidding patterns of people who bid on vintage railroad stuff, which I found quite charming and beautiful.
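
  To make the mechanics of that strategy concrete, here is a minimal sketch in Python of the two-hop, shared-bidder expansion Doctorow describes. The helper functions get_bidders and get_current_auctions are hypothetical stand-ins for whatever auction-site queries would supply that data; nothing here is drawn from an actual Auctionweb or eBay interface.

```python
def find_new_treasures(my_past_auctions, get_bidders, get_current_auctions):
    """Two-hop expansion through shared-bidder links (hypothetical queries)."""
    # First hop: everyone who has ever bid against me likely shares my tastes.
    rivals = set()
    for auction in my_past_auctions:
        rivals.update(get_bidders(auction))

    # Second hop: everyone currently bidding against those rivals.
    circle = set(rivals)
    for rival in rivals:
        for auction in get_current_auctions(rival):
            circle.update(get_bidders(auction))

    # Whatever the whole circle is bidding on now is a candidate discovery,
    # including things I didn't know I was looking for.
    discoveries = set()
    for person in circle:
        discoveries.update(get_current_auctions(person))
    return discoveries
```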

  Like SETI@home, OpenCOLA requires a population of volunteers. While you put documents in a folder on your computer, waiting for similar documents to appear, others must do the same. Your client probes through your map of the network (and through the maps of those you know about) for other people’s folders (a process referred to in the search business as “spidering”) and looks at the record of what these other people (peers) have accepted (by saving the files somewhere) and rejected (by discarding the files). OpenCOLA refers to these records as people’s “caches”—a file in their OpenCOLA folder where the record of their save-or-discard decisions is stored.

  Having discovered a group of peers on the network, the next thing my agent does is to spider [automatically search] whatever they have in their folders, pull in their caches, and team up with them to discover the places where they found those things. If you and I both like Wired News, our peers team up to spider Wired News periodically, discover new documents and bring them to the attention of one or the other of us, and based on what one or the other of us does, bring it to the other person’s attention or just round-file it. The last thing the OpenCOLA agent does with documents is to bring them back to my attention and observe what I do with them. When I file them, it knows that I like them; when I throw them away, it knows that I didn’t. And it either upgrades or downgrades other peers’ ability to recommend documents to me based on what I’ve done.
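
  The save-or-discard feedback loop Doctorow outlines can be expressed as a simple reputation update. The sketch below is illustrative only; the class name, the multiplicative weighting rule, and the threshold are assumptions, not OpenCOLA’s actual code.

```python
class PeerReputation:
    """Weight each peer by how well their finds match my save-or-discard record."""

    def __init__(self):
        self.weights = {}  # peer id -> recommendation weight, starts at 1.0

    def record_decision(self, peer, saved):
        # Filing a recommended document signals I liked it; throwing it away
        # signals I didn't. Upgrade or downgrade the peer accordingly.
        weight = self.weights.get(peer, 1.0)
        self.weights[peer] = weight * (1.1 if saved else 0.9)

    def still_trusted(self, peer, threshold=0.5):
        # Peers whose weight decays below the threshold stop getting their
        # discoveries brought to my attention.
        return self.weights.get(peer, 1.0) >= threshold
```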

  Doctorow notes that the cooperative nature of the system he describes doesn’t rely on any pledges of altruism or enforced sharing. Simply looking for material and then deciding whether or not to keep it creates information that is useful to others. Each participant cooperates by leaving the record of those decisions available to everyone else; keeping an open folder to be filled with interesting documents is self-interested behavior that both invites contributions and informs others who seek them.

  The thing that defines peer-to-peer, I think, is the degree to which the power of the technology depends on Metcalfe’s Law. In the end, a word processing program is only a word processing program whether you’re the only user or the millionth user; its utility doesn’t change. Napster is not Napster if you’re the only user. Napster is nothing more than a folder full of MP3s if you’re the only user. Napster doesn’t tell you to share your files, but the system is arranged so that the files you have plundered are available for others to plunder during the time you have the software running so that you can plunder more files. The problem is congestion; the more users you have, the harder your network is to connect to. What a peer-to-peer network can do is provide a commons where the sheep shit grass, where every user provisions the resource he consumes.
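
  Metcalfe’s Law, which Doctorow invokes here, holds that a network’s potential value grows with the number of possible connections among its users, roughly the square of the number of users. A one-function illustration:

```python
def metcalfe_value(n):
    """Possible pairwise connections among n users: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# A word processor is equally useful with one user or a million, but a
# sharing network is not:
#   metcalfe_value(1) == 0          (Napster with one user is just a folder)
#   metcalfe_value(100) == 4_950
#   metcalfe_value(1_000_000) == 499_999_500_000
```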

  Grids and Ad-hocracies

  Ad-hocracies among cooperating individuals spread out across the world are not the only ways to take advantage of p2p power. Consider the idle disk space and CPU cycles on all the thousands of computers owned by a big company in a single building or worldwide. If computers were heaters, almost every computer-using enterprise would be running them at full capacity with the windows open, leaking energy into the air. United Devices and other commercial providers help these companies apply their own in-house computing technology to appropriate tasks, recapturing that otherwise-wasted computational potential. While voluntary virtual communities create supercomputers to cure cancer or look for messages from outer space, insurance companies crunch actuarial statistics and petroleum companies run geological simulations. Even more significantly, major corporate and government-sponsored research programs are looking at distributed processing as a new paradigm for the provision of computing power in the future. The notion of “grid computing” has attracted powerful sponsors. Several governments and corporations have started programs to create “farms” of networked computers that could provide computing resources on demand—more like the way electricity is delivered than the way computers have traditionally been marketed.
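
  The cycle behind such efforts, whether SETI@home volunteers or a company’s in-house grid, is simple: pull a small work unit from a coordinator, compute it with spare cycles, and report the result. The sketch below shows the shape of that loop under stated assumptions; the callback names are hypothetical, not any vendor’s actual API.

```python
import time

def donate_idle_cycles(is_idle, fetch_work_unit, compute, report_result):
    """Crunch small work units only while the machine is otherwise idle."""
    while True:
        if is_idle():
            unit = fetch_work_unit()            # pull a task from the coordinator
            report_result(unit, compute(unit))  # send the answer back for reassembly
        else:
            time.sleep(60)                      # back off while the owner needs the CPU
```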

  Some criticize the movement toward grid computing as an attempt to return to the days of the mainframes, when the computer priesthood, not the users, controlled access to computer power. It wouldn’t be the first time that a voluntary grassroots movement turned into an operating division of IBM. When IBM, the bastion of the mainframe, was confronted by the invention of the PC by Xerox PARC and Apple, they decided to embrace it and mainstreamed what had been a technological counterculture by introducing their own version. When the open source movement challenged Microsoft and other purveyors of proprietary software through the cooperative efforts of distributed teams of programmers working on software that was open to all to use or modify, IBM mainstreamed the movement by spending a billion dollars to create their own open source tools, products, services, and processes.34 Microsoft has built features of grid computing into its .Net initiative, and in February 2002, IBM announced its support for open-source grid-computing platforms and proclaimed that it would “grid-enable” its existing products.35

  For years, clustering microprocessors in the same physical environment (rather than distributing them across the Net) has been the foundation of “massively parallel” approaches to creating large quantities of computing power. Other than the computers that the National Security Agency doesn’t talk about, the most powerful computers continue to be those used by major U.S. nuclear weapons research laboratories; the fastest current supercomputer is the 8,000-processor cluster at Lawrence Livermore National Laboratory, known as ASCI White.36 In 1995, the I-WAY experiment used high-speed networks to connect seventeen sites across North America to explore grid computing.37

  Perhaps the most significant news in the grid computing effort is that astrophysicist Larry Smarr has enlisted the governor of California to finance what Smarr calls “the emerging planetary supercomputer.”38 Smarr has a track record when it comes to creating as well as forecasting the next paradigm in computation. He founded the National Center for Supercomputing Applications (NCSA) in 1985. Part of the project involved finding ways to link the nation’s five supercomputing centers through high-speed Internet connections. In 1993, another part of the NCSA research resulted in the creation of Mosaic, the browser software that detonated the explosive growth of the Web.39 His latest project, funded by $300 million in state and private financing, is the Center for Information Technology Research in the Interest of Society (CITRIS). “He imagines bridges that are covered with a fabric of computerized sensors that will automatically tell engineers where earthquake damage has occurred, or a world in which intelligent buildings whisper directions to visitors on the way to their destinations.”40 CITRIS will focus on new kinds of sensors, distributed computing software, and advanced wireless Internet.

  Like the digital computer itself, grid computing is seen as a tool for fundamental research, like the microscope, telescope, or particle accelerator. Britain is building a national grid, linking research centers from Edinburgh to Belfast. Companies reported to be experimenting with internal grids include Pfizer, Ericsson, Hitachi, BMW, GlaxoSmithKline, and Unilever.41 With ad-hocracies, national defense research, and major corporations all experimenting with different approaches, it isn’t hard to forecast grid computing as the emerging paradigm in computation. What is less clear is whether some single winner or cartel of big players will dominate the scene to the point where ad-hocracies are squeezed out or marginalized or whether industrial-scale and strictly amateur p2p efforts will coexist. The legal counterattack against p2p technologies has barely begun, and its first strike, the recording industry’s success in shutting down Napster, was a stunning one. In 2001, a college computer technician in Georgia who contributed his school’s idle processing power to distributed.net was charged by the FBI with computer theft and trespass.42 In 2002, the technician was fined $2,100 and sentenced to a year’s probation.43 Because cable television infrastructure providers are regulated differently than telephone companies, legal observers such as Lawrence Lessig fear that broadband Internet service providers will move to block p2p activities over their parts of the Internet.44

  Peer-to-peer technologies and social contracts are reconverging with both the clouds of mobile devices that are spreading through the world and the mesh of sensors and computing devices that are increasingly embedded in the environment. In the early 1990s, the visions of “virtual reality” modeled a world where humans would explore artificial universes that would exist inside computers. Less widely reported were even wilder speculations of a world of the early twenty-first century where the computers would be built into reality, instead of the other way around.

  4

  The Era of Sentient Things

  Consider writing, perhaps the first information technology: The ability to capture a symbolic representation of spoken language for long-term storage freed information from the limits of individual memory. Today this technology is ubiquitous in industrialized countries. Not only do books, magazines and newspapers convey written information, but so do street signs, billboards, shop signs and even graffiti. Candy wrappers are covered in writing. The constant background presence of these products of “literacy technology” does not require active attention, but the information to be conveyed is ready for use at a glance. It is difficult to imagine modern life otherwise.

  Silicon-based information technology, in contrast, is far from having become part of the environment. More than 50 million personal computers have been sold, and nonetheless the computer remains largely in a world of its own. It is approachable only through complex jargon that has nothing to do with the tasks for which people actually use computers. The state of the art is perhaps analogous to the period when scribes had to know as much about making ink or baking clay as they did about writing.

  —Mark Weiser, “The Computer for the 21st Century,” 1991

  When Computers Disappear

  Scott Fisher has been putting computers on his head for as long as I’ve known him. In 1983 at the Atari Research Laboratory, I watched Fisher’s group dramatize ways people might use computers in the future. Fisher pretended to put something on his head.1 Then he swiveled his head as if he were looking around. In 1990, when Fisher got his chance to build “head-mounted displays” for NASA, he invited me to stick my own face inside a computerized helmet in order to peer around “virtual reality.” Cyberspace had arrived! It turned out to look like a cartoon, but that’s another story.

  In 2001 I found myself walking around a campus outside Tokyo, my head enclosed by a helmet once again. The world I peered at this time looked almost exactly like the one in which my body resides, not a cartoon galaxy far away. The physical world I experienced through Dr. Fisher’s latest helmet, however, had a few features reality never included before. Instead of substituting a virtual model in place of the physical world, the twenty-first-century version added information to the physical world.

  I walked up to a real tree at Fisher’s test site. If I had kept walking, I’d have bumped into a branch. An icon hovered in the air at eye level next to the tree trunk, like a tiny fluorescent UFO. I pointed my mobile telephone at the icon. A picture of Scott Fisher and the words, “Hello, Howard!” appeared. The text message floated in space as if it were projected on a transparent screen. Fisher had left this message for me at this tree the day before, sending it from his home computer in Tokyo. He explained that I could have read an explanation of some aspect of the tree, examined the tree’s hidden roots, even looked at a recent satellite image of the field I was stumbling around.
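
  The underlying mechanism, messages pinned to coordinates and retrieved when a viewer comes near, can be sketched in a few lines. Everything below, the function names, the storage scheme, the 25-meter radius, is an illustrative assumption rather than a description of Fisher’s actual system.

```python
import math

anchored_messages = []  # each entry: (latitude, longitude, text)

def leave_message(lat, lon, text):
    """Pin a note to a point in physical space, as Fisher did at the tree."""
    anchored_messages.append((lat, lon, text))

def messages_near(lat, lon, radius_m=25.0):
    """Return notes anchored within radius_m meters of the viewer's position."""
    found = []
    for m_lat, m_lon, text in anchored_messages:
        # Equirectangular approximation: one degree of latitude is ~111,320 m;
        # adequate at walking distances.
        dx = (m_lon - lon) * 111_320.0 * math.cos(math.radians(lat))
        dy = (m_lat - lat) * 111_320.0
        if math.hypot(dx, dy) <= radius_m:
            found.append(text)
    return found
```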

  In 1991, the artificial world I explored was a three-dimensional computer graphic simulation I could navigate (carefully, because I was blind to the external world) and manipulate by way of a computerized glove. In contrast, Fisher’s 2001 foray into “wearable environmental media” was an example of “augmented reality”—one of many current efforts to mingle virtual and physical worlds. Other investigators I visited at IBM’s Almaden laboratory in California, MIT Media Lab in Cambridge, Sony’s Tokyo Computer Science Laboratory, and Ericsson’s wireless lab outside Stockholm used mobile phones, digital jewelry, physical icons, and other technologies for combining bits and atoms, digital personae and physical places.

  Different lines of research and development that have progressed slowly for decades are accelerating now because sufficient computation and communication capabilities recently became affordable. These projects originated in different fields but are converging on the same boundary between artificial and natural worlds. The vectors of this research include the following:

  Information in places: media linked to location

  Smart rooms: environments that sense inhabitants and respond to them

  Digital cities: adding information capabilities to urban places

  Sentient objects: adding information and communication to physical objects

  Tangible bits: manipulating the virtual world by manipulating physical objects

  Wearable computers: sensing, computing, and communicating gear worn as clothing

  Information and communication technologies are starting to invade the physical world, a trend that hasn’t yet begun to climb the hockey stick growth curve. Within the next ten years, shards of sentient silicon will be inside boxtops and dashboards, pens, street corners, bus stops, money, and most things that are manufactured or built. These technologies are “sentient” not because embedded chips can reason but because they can sense, receive, store, and transmit information. Some of these cheap chips sense where they are. The cost of a global positioning system chip capable of tracking its location via satellite to an accuracy of ten to fifteen meters is around $15 and dropping.2

  Odd new things become possible. A shirt label gains the power to disclose what airplanes, trucks, and ships carried the shirt, what substances compose it, and the URL of a webcam in the factory where it was manufactured. Things tell you where they are. Places can be haunted by intergenerational messages. Virtual graffiti on books and bars becomes available to those who have the password.

  Radio, infrared, and other invisible signaling technologies already enable chips to transfer information to other people and to devices elsewhere in the room or on the other side of the world. Cheap sensors are learning how to self-organize on bodies, in buildings, across cities, and worldwide via wireless networks. The first conference on “Sensor Networks for Healthcare, the Environment, and Homeland Defense” was held in 2002.3 There are already more than 200 billion chips in the world. The next 200 billion chips will be able to talk to each other and to us. As the president of Bell Labs said in a 2000 speech, “When your children become roughly your age . . . a mega-network of networks will enfold the entire earth like a communication skin. As communication becomes faster, smaller, cheaper and smarter in the next millennium, this skin, fed by a constant stream of information will . . . include millions of electronic measuring devices all monitoring cities, roadways, and the environment.”4 In February 2002, the Chief Technology Officer of Intel announced that in the near future, Intel would include radio transponder circuitry in every chip Intel manufactures.5

  Watch smart mobs emerge when millions of people use location-aware mobile communication devices in computation-pervaded environments. Things we hold in our hands are already speaking to things in the world. Using our telephones as remote controls is only the beginning. At the same time that the environment is growing more sentient, the device in your hand is evolving from portable to wearable. A new media sphere is emerging from this process, one that could become at least as influential, lucrative, and ubiquitous as previous media spheres opened by print, telegraphy, telephony, radio, television, and the wired Internet. Media spheres grow from technologies that provide channels for symbolic communication, commercial exchange, and group formation. Media spheres include industries and financial institutions, scientists and engineers, content providers and consumers, regulatory infrastructures, power structures, civic impacts, social networks, and new ways of thinking.

 
