Smart Mobs

by Howard Rheingold


  It isn’t feasible to put expensive RFID tags on the wide variety of objects that barcodes track. The less expensive passive tag contains a tiny coil of printed conductive ink. When the tag passes through the magnetic field of a reading device, the coil generates just enough electricity to transmit a signal a short distance—moving a coil of conductive wire through a magnetic field is precisely how a generator works. Manufacturers and others who have looked at the advantages of RFID tags believe that they will replace barcodes and revolutionize the way objects are tracked only when the price falls to around one cent, the fabled “penny tag.” At the time of this writing, the price has dropped to around fifteen cents. Vivek Subramanian at the University of California claimed to have achieved a breakthrough in the spring of 2002, involving ink-jet printer technology and electronic inks that could print sub-penny smart tags on paper, plastic, or cloth: “Can we print a circuit on a package that when you ping it with a radio signal, it’ll reply ‘Hey, I’m a can of soup’? Just as importantly, can we do it very inexpensively?”43 The Auto-ID Center at MIT, sponsored by Procter & Gamble, UPS, the U.S. Postal Service, Gillette, Johnson & Johnson, International Paper, and others for whom smart tags could mean huge cost savings, is the focus of a major interdisciplinary R&D effort.44
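  The exchange Subramanian describes is simple enough to sketch in a few lines of code. The Python fragment below is a minimal illustration, not any real RFID protocol stack; the class names, the energy threshold, and the EPC-style identifiers are all invented for the example:

```python
# Illustrative sketch of a passive-tag read: the reader's field powers the
# tag, and the tag replies with its stored identity. All names are invented.

class PassiveTag:
    """A passive tag holds only a read-only identifier; it has no battery."""
    def __init__(self, epc: str):
        self.epc = epc  # e.g., an identifier printed into the tag's circuit

    def respond(self, field_energy: float):
        # The induced current must clear a threshold before the tag can transmit.
        return self.epc if field_energy > 0.5 else None

class Reader:
    def __init__(self, catalog: dict):
        self.catalog = catalog  # maps tag IDs to human-readable descriptions

    def ping(self, tag: PassiveTag) -> str:
        reply = tag.respond(field_energy=1.0)  # energize the tag's coil
        return self.catalog.get(reply, "unknown object")

reader = Reader({"epc:soup-001": "Hey, I'm a can of soup"})
print(reader.ping(PassiveTag("epc:soup-001")))  # -> Hey, I'm a can of soup
```

The point of the sketch is the asymmetry: all the power and intelligence live in the reader, while the tag needs only a coil and a few bits of identity, which is what makes the penny price point conceivable.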

  Used together, wireless network connections, portable computation, and tag readers make possible new applications that could change the nature of products, places, and social action. For a consumer society, the transformation of consumption may be profound. Changes in the most mundane but essential element of shopping, the label, raise political issues. Opponents of genetically engineered foods and pesticides, for example, have lobbied to require identification of those foods on their labels. Labor-rights activists have called for clothing labels that include a rating of the labor conditions of the manufacturing company or nation that produced the good. In the early days of American trade unions, battles were fought and songs were sung about “wearing the union label.” With wireless devices that can read object tags, Web services that offer particular kinds of description and warning information can be created fairly easily. When people find out how the Christian Coalition or Greenpeace rates a product or a place, the collective political power of consumers could shift in unpredictable ways.
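  A hedged sketch of how such a label-reading service might be wired together: a handheld reads the tag’s identifier and fans the query out to whichever advocacy groups’ rating services the owner subscribes to. The service names, product codes, and verdicts below are placeholders, not real endpoints:

```python
# Hypothetical rating lookup: each advocacy group publishes its own verdicts,
# keyed by the same tag identifier the handheld reads off the product.

RATING_SERVICES = {
    "labor-rights": {"epc:shirt-42": "C: subcontractor violations reported"},
    "environmental": {"epc:shirt-42": "B: low-pesticide cotton"},
}

def lookup_ratings(epc: str) -> dict:
    """Gather every subscribed service's verdict on the scanned object."""
    return {name: db.get(epc, "no rating on file")
            for name, db in RATING_SERVICES.items()}

for service, verdict in lookup_ratings("epc:shirt-42").items():
    print(f"{service}: {verdict}")
```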

  Could penny tags be used to promote social capital as well as consumption? Digital annotations of physical objects and places could catalyze interconnection between groups of people within a locality. Imagine a neighborhood bus stop where a number of people wait during the day but often at different times. These people may share a good deal in common but lack effective methods of communicating with one another. Associating discussion boards or Web pages with the bus stop could allow more flexible ways for people to connect with one another. A range of services like news, help wanted listings, discussions, reports of damage or crime, and goods and services for barter or sale could be provided to people while they are present in the space. Entertainment applications, including games, are easy to imagine.
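  The plumbing for such place-based annotation is modest; the hard problems are social, not technical. As a rough sketch (the bus-stop identifier and message format are invented), a discussion board keyed to a location’s tag might amount to little more than this:

```python
# Toy place-keyed message board: posts are stored under a physical location's
# tag ID, so anyone present at the spot can read and contribute.

from collections import defaultdict

boards = defaultdict(list)  # place ID -> list of messages

def post(place_id: str, author: str, text: str) -> None:
    boards[place_id].append(f"{author}: {text}")

def read(place_id: str) -> list:
    return boards[place_id]

post("busstop:elm-and-5th", "rosa", "Anyone else commute to the hospital? Carpool?")
post("busstop:elm-and-5th", "dev", "Bench vandalized again; reported to the city.")
print(read("busstop:elm-and-5th"))
```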

  Perhaps the most intrusive near-term application of RFID tags would be “smart money” that could record where it came from, who has owned it, and what it has bought. In December 2001, the European Central Bank was reported to be working on embedding RFID tags in currency by 2005.45 Although prevention of counterfeiting is the bank’s overt motivation, the same technology could afford surveillance of individual behaviors at a scale never before imagined. American civil libertarians assert that sentient currency would violate the U.S. constitutional prohibition against unreasonable search and seizure.46 In July 2001, Hitachi announced that its mu-chip, a square with sides no larger than four-tenths of a millimeter, containing a radio transmitter and 128 bits of read-only memory—small enough to embed in paper money without being damaged by folding—would go on the market at around 20 yen each, or approximately 15 cents.47
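  What makes tagged currency so potent is not the chip but the registry behind it. A toy sketch of the surveillance concern, with an entirely hypothetical ledger and event format, shows how little machinery it takes to turn durable note identifiers into complete histories:

```python
# Hypothetical banknote ledger: once each note carries a durable ID, any
# reader that logs to a shared registry accumulates the note's full history.

from datetime import date

ledger = {}  # note ID -> list of (date, event) records

def record(note_id: str, event: str) -> None:
    ledger.setdefault(note_id, []).append((date.today(), event))

record("note:EUR-50-A1B2", "issued by central bank")
record("note:EUR-50-A1B2", "withdrawn, ATM #2207")
record("note:EUR-50-A1B2", "spent at pharmacy, register 3")
print(ledger["note:EUR-50-A1B2"])  # a complete trail, one note at a time
```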

  After computers disappear into the walls, they might start floating in the air. The mu-chip is approaching the size of “smart dust,” a kind of sentient object that doesn’t exist yet. Researchers at the University of California, funded by Defense Advanced Research Projects Agency (DARPA) grants, combine chips that manipulate information with “microelectromechanical systems” that can perform physical activities.48 Each “mote” combines a sensor (for pollution or nerve gas, for example) with optical transceivers that can communicate via laser beams for miles, sometimes with wings.49 The first prototype, the size of a matchbox, contained temperature, barometric pressure, and humidity sensors and more computing power than the Apollo moon lander. “There’s nothing in this thing that we can’t shrink down and put into a cubic millimeter of volume,” said UC professor Kristofer Pister.50 When motes grow small enough, they can fly or float. Flying motes might be taught to flock and swarm.
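  In software terms, a mote is just a sensor package bound to a transceiver. The sketch below is illustrative only; the class, the random readings, and the string “transmission” stand in for the Berkeley hardware rather than describing it:

```python
# Illustrative mote: sensors plus a transmit step, nothing more. The sensor
# suite mirrors the matchbox prototype (temperature, pressure, humidity).

import random

class Mote:
    def __init__(self, mote_id: int):
        self.mote_id = mote_id

    def sense(self) -> dict:
        # Stand-ins for the prototype's physical sensors.
        return {"temp_c": random.uniform(-10, 40),
                "pressure_hpa": random.uniform(950, 1050),
                "humidity_pct": random.uniform(0, 100)}

    def transmit(self) -> str:
        # In hardware this would be an optical (laser) burst; here, a string.
        readings = self.sense()
        return f"mote {self.mote_id}: " + ", ".join(
            f"{k}={v:.1f}" for k, v in readings.items())

swarm = [Mote(i) for i in range(3)]
for mote in swarm:
    print(mote.transmit())
```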

  Smart dust, like digital computers and computer networks, is a brainchild of the Pentagon, whose DARPA sponsors undoubtedly see this technology as the ultimate in invisible combat surveillance devices. Spin-offs materialize unpredictably; swarming sensors could be employed in weather prediction, nuclear reactor safety, environmental monitoring, inventory control, and food and water quality control. I wouldn’t be surprised if people found ways to turn swarming sentient micromechanical motes into cosmetics, entertainment, or pornography. People for whom pervasive computing is an abstraction will understand very clearly that the traditional barriers between information and material have changed when the air they breathe might be watching them. Computers were room-sized in the 1950s, then desktop-sized in the 1980s. Today, we’re holding powerful computing and communication systems in our hands. Next, we’ll lose sight of them if we drop them on the rug. The border between bits and atoms is where all the different disciplines of virtual reality, augmented reality, smart rooms, tangible interfaces, and wearable computing seem to be converging.

  As Neil Gershenfeld explained it to me, the first epoch of MIT’s Media Laboratory, from its founding in 1985 to the end of the twentieth century, was about “freeing bits” from their different formats as text or audio or video or software and converging them into one digital form. The next epoch, Gershenfeld predicted, will be about “merging bits and atoms.” When I first got wind of this notion a few years ago, I didn’t connect it with the Internet or pervasive computing. I had been dropping in on Professor Hiroshi Ishii’s group for a few years, and when I visited Ishii at the Media Lab in 1997, he was working on something called tangible bits. Ishii was enthusiastic about abandoning traditional ways of operating computers, such as manipulating icons on a screen, in favor of manipulating tangible objects. He called these physical-virtual objects “phicons,” for physical icons. That was the first time I had observed the inclusion of part of the physical world within a virtual world.

  Media Lab is, above all, a place where people build working models of wild ideas like phicons. Ishii led me to a wide, blank table surface. At the edge of the table were several wooden objects the size of large alphabet blocks. One of them was a model of MIT’s landmark dome. I picked up the dome and put it on the table. The blank table turned into a map of the MIT campus. I moved the phicon, and the map moved. I rotated the phicon, and the map rotated. Ishii handed me a second object, which was recognizable as a model of the I. M. Pei-designed Media Lab building. I put it down on the table, and the map shifted so that both the dome and the lab were in their proper places. I shifted one, then the other phicon; the map shifted to adjust, so that both buildings were always in correct juxtaposition to the rest of the landscape.
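  The geometry behind the two-phicon table is worth spelling out: two point correspondences, each pairing a building’s map coordinate with the spot where its phicon sits, are exactly enough to determine a similarity transform (rotation, uniform scale, and translation). A small sketch using complex arithmetic, with invented coordinates:

```python
# Two correspondences fix the map-to-table transform. Treating 2D points as
# complex numbers makes the algebra compact: table = s * map + t, where the
# complex factor s encodes rotation and scale and t encodes translation.

def fit_transform(map_a, table_a, map_b, table_b):
    """Solve table = s * map + t from two point correspondences."""
    s = (table_b - table_a) / (map_b - map_a)
    t = table_a - s * map_a
    return s, t

def apply(s, t, point):
    return s * point + t

# Map coordinates of the dome and the Media Lab, and where their phicons sit.
dome_map, lab_map = complex(0, 0), complex(10, 0)
dome_table, lab_table = complex(30, 40), complex(30, 60)  # rotated 90 degrees

s, t = fit_transform(dome_map, dome_table, lab_map, lab_table)
# Any other campus landmark can now be drawn in its correct table position.
print(apply(s, t, complex(5, 2)))  # -> (26+50j)
```

Once the transform is pinned down by the two phicons, every other feature of the campus map can be rendered in its correct table position, which is why the rest of the landscape stays in correct juxtaposition as the blocks move.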

  Media Lab research tends to aim at the technologies people will use in ten or twenty years. Sony’s Computer Science Laboratories, located off Sony Street in Sony City in Tokyo, tends to work on projects that are closer to becoming products. I visited Jun Rekimoto, the young director of the Interaction Lab, a group of forty researchers. One can carry a NaviCam handheld device around the Interaction Lab, point it at the door of a researcher’s office, and see a presentation about that researcher’s work.51 Rekimoto calls NaviCam a “magnifying glass for augmented reality.” Instead of wearing cumbersome headgear, a user simply points the device at an RFID-augmented object and sees or hears the information linked to it.

  Rekimoto invited me to try the “pick and drop” method for moving data and media from one computer to another by using a chip-enhanced pen to “pick up” a virtual object from a screen and then “drop” it onto the screen of a different computer. I picked up a depiction of a Monet painting from a handheld device and dropped it onto a wall display, which displayed it at highest resolution. Rekimoto called this “the chopstick metaphor,” to contrast it with the traditional “desktop metaphor” of graphical representations of files and folders.
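  Conceptually, pick-and-drop needs only a shared server that treats the pen’s embedded chip ID as a temporary parking spot for whatever was picked up. A minimal sketch; the class and method names here are invented, not Sony’s actual API:

```python
# Toy pick-and-drop: the source display parks the picked object under the
# pen's chip ID; the destination display retrieves it when the pen touches down.

class PickAndDropServer:
    def __init__(self):
        self.in_transit = {}  # pen ID -> object currently "on" the pen

    def pick(self, pen_id: str, obj) -> None:
        # Called by the source display when the pen lifts an object.
        self.in_transit[pen_id] = obj

    def drop(self, pen_id: str):
        # Called by the destination display; the object "arrives" there.
        return self.in_transit.pop(pen_id, None)

server = PickAndDropServer()
server.pick("pen:7f3a", "monet_water_lilies.jpg")  # lift from the handheld
print(server.drop("pen:7f3a"))                     # land on the wall display
```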

  Rekimoto is “interested in designing a new human computer interaction style for highly portable computers, that will be situation-aware and assistance-oriented rather than command-oriented. Using this style, a user will be able to interact with a real world that is augmented by the computer’s synthetic information. The user’s situation will be automatically recognized by applying a range of recognition methods, allowing the computer to assist the user without having to be directly instructed. Before the end of the decade, I expect that such computers will be as commonplace as today’s Walkmans, electronic hearing aids, and wristwatches.”52 Think of picking and dropping sounds, pictures, and videos among Sony cameras, MP3 players, and PCs.

  Perhaps the ultimate bits-and-atoms laboratory at MIT Media Lab is the Physics and Media Group, directed by Professor Neil Gershenfeld. Gershenfeld published a book in 1999 titled When Things Start to Think, an allusion to a Media Lab research consortium named Things That Think.53 He had stepped off a plane from India the morning we met; he had been there as part of MIT’s Digital Nations effort to apply technology to problems in the developing world. He wore scuffed white track shoes, chinos, and horn-rimmed glasses, and his face seemed younger than the gray in his curly hair suggested.

  His frequent travel to India, like the entire Digital Nations consortium, is based on a belief that pervasive computation can provide relief for some of the more urgent problems in the world’s poorest countries. “Much of our work in India is aimed at reversing urbanization by moving opportunity closer to villages. Computers and networks can help make a difference in governance, health care, disaster recovery, educational infrastructure, and land use. But the computers need to cost less than ten dollars and should not require an electrical grid or expert support.”

  I had been eager to talk about the penny tags, but Gershenfeld wanted to talk about paintable computing. “We’re in the endgame of penny tags,” he said. “It’s now becoming an industrial problem.” He now pursues a vision of self-organizing networks of sensors and computers so inexpensive that people could literally paint them on surfaces. One of Gershenfeld’s students, William Butera, described a prototype as “an instance of several thousand copies of a single integrated circuit (IC), each the size of a large sand kernel, uniformly distributed in a semi-viscous medium and applied to a surface like paint. Each IC contains an embedded microprocessor, memory, and a wireless transceiver in a 4 mm square package, is internally clocked, and communicates locally. . . . A programming model employing a self-organizing ecology of mobile code fragments supports a variety of useful applications.”54 Imagine smart dust that knows how to organize into ad hoc networks that solve computing problems, configuring the painted surfaces as supercomputers, display screens, distributed microphones or speakers, or wireless transceivers.
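  The self-organizing claim is the interesting part, and it can be demonstrated at toy scale: scatter nodes at random (the “paint”), let each one talk only to neighbors within radio range, and a simple gossip computation converges with no central coordinator. The parameters below are arbitrary, chosen only for illustration:

```python
# Toy gossip network: randomly scattered nodes, local-only communication,
# and repeated pairwise averaging that converges without any central control.

import random, math

random.seed(1)
N, RANGE = 50, 0.25  # node count and radio range, arbitrary for the demo
nodes = [{"x": random.random(), "y": random.random(),
          "value": random.uniform(0, 100)} for _ in range(N)]

def neighbors(i):
    """Indices of every node within radio range of node i."""
    return [j for j in range(N) if j != i and
            math.dist((nodes[i]["x"], nodes[i]["y"]),
                      (nodes[j]["x"], nodes[j]["y"])) < RANGE]

for _ in range(20):  # each round, every node averages with one random neighbor
    for i in range(N):
        near = neighbors(i)
        if near:
            j = random.choice(near)
            avg = (nodes[i]["value"] + nodes[j]["value"]) / 2
            nodes[i]["value"] = nodes[j]["value"] = avg

values = [n["value"] for n in nodes]
print(f"spread after gossip: {max(values) - min(values):.2f}")
```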

  Gershenfeld’s computer screen was projected onto a large white table. He rubbed his hand lovingly across the part of the table’s surface where he projected a model of the new Media Lab building while he described how the building itself is an experiment. Gershenfeld will head the Center for Bits and Atoms to be housed in the new building, in which every switch and thermostat will have an Internet address. Paintable computing is a natural extension of Gershenfeld’s long-standing belief that “the real promise of connecting computers is to free people, by embedding the means to solve problems in the things around us.”55

  Zoom to yet another level, from the scale of the smart room, with its computationally painted walls, to the scale of the individual human body. The political implications of technical design choices stand out more clearly when computers colonize our most intimate technology—clothing and jewelry—and when people don’t sit at computers, hold technology in their hands, or even walk around inside it, but wear it. Issues arising from the design and use of wearable computing bring into high technopolitical contrast the distinctions among virtual reality, augmented reality, and mediated reality, and between smart rooms and personal sentient infomediaries.

  Wearable Computers: The Political Battleground of Pervasive Technology

  Like most of the wired world, I learned about Steve Mann, the first cyborg online, when he started webcasting everything he saw. Mann, who had been tinkering with wearable computers since he was a child, had ended up at MIT, where he had equipped himself with a helmet that enclosed his head and showed him the world through video cameras. The video feed was filtered through computers that enabled Mann to add and subtract features from the world he saw around him. Starting in 1994, wireless communications gear enabled him to beam everything he saw to a Web page. Mann’s wearable computer had many features, including access to his email and the Web, but what was remarkable was his commitment to wearing his wearable computer all the time. By now, he’s been mediating reality for most of his life.

  Mann, now a professor at the University of Toronto, has wanted to be a cyborg since he was a teen. “Cyborg” stands for “cybernetic organism,” a word coined by Manfred Clynes and Nathan Kline and popularized by the inventor of cybernetics, Norbert Wiener, to represent a merger of human and synthetic components. To many, the word and all it evokes suggest a chilly vision, mechanical and dehumanized, the ultimate bitter victory of technophilia at the expense of all that is humane about humans. Mann has always thought differently, and he wrote a passionate manifesto in 2001 that struck a chord with me after I had spent a year tasting augmented realities:

  Rather than smart rooms, smart cars, smart toilets, etc., I would like to put forward the notion of smart people.

  In an HI [humanistic intelligence] framework, the goal is to enhance the intelligence of the race, not just its tools. Smart people means, simply, that we should rely on human intelligence in our development of technological infrastructure rather than attempt to take the human being out of the equation. An important goal of HI is to take a first step toward a foremost principle of the Enlightenment, that of the dignity of the individual. This is accomplished, metaphorically and actually, through a prosthetic transformation of the body into a sovereign space, in effect allowing each and every one of us to control the environment that surrounds us. . . . One of the founding principles of developing technology under the HI system is that the user must be an integral part of the discourse loop. The wearable computer allows for new ways to be, not just do.56

  As a teenager in Canada, Mann talked his way into a job at a television repair shop and started wiring up portable cameras and screens. Mann’s first prototype, WearComp0, was bulky, but it quickly began to evolve into his next version, WearComp1. He took apart game machines to add joysticks, swapped in better batteries, improved the audio-video recorders and displays, and kludged together a wireless data connection. In 1982, Mann started building components and circuits in clothing. By his early twenties, cyborg wasn’t something Mann did; it was something he was. He found supportive professors at McMaster University, where he worked toward his master’s degree, perfected WearComp, and continued to live as a cyborg through the 1980s.

  In 1989, the Private Eye eyeglass display became available, projecting a virtual image onto one eye that appeared to float in space as a fifteen-inch display positioned eighteen inches away.57 In 1990, Gerald Maguire and John Ioannidis at Columbia plugged the Private Eye into a portable computer and a wireless Internet connection to create a mobile “student notebook.”58 Also in 1990, Andy Hopper at Olivetti’s laboratory in Cambridge, England, used infrared sensors to locate the “active badges” of users.59

  Although MIT has been the site of a great deal of work and has been the center of attention, it can be argued that the field of wearable computing is rooted in Pittsburgh, where “in 1991, 25 participants in a summer rapid prototyping course offered by the Carnegie Bosch Institute were tasked with the following problem: within one semester, design and build a functional computer which could be worn on the body. The resulting system, VuMan, became the first of more than a dozen wearable computers to emerge from the project in the subsequent decade.”60

  That same year, Steve Mann moved to MIT as a Ph.D. student at the Media Lab. The first thing he did when he arrived was to sneak up on the roof to install antennae for his radio communications infrastructure. You would think that a person so deeply involved in computers that he had worn them since he was sixteen would find a haven at MIT and the Media Lab, but Mann’s beliefs aren’t always what you might anticipate in a self-made cyborg. He fears and scorns the motives of military and corporate sponsors:

  The vision of many of those developers working for some of our biggest and most powerful government institutions is in contrast to my original attempts to personalize and humanize technology. Which road will we go down? The road on which wearable computers create and foster independence and community interaction? Or the road on which wearable computers become part of the apparatus of electronic control we are ever more subject to and unaware of?61

 
