Smart Mobs


by Howard Rheingold


  Coevolution seems to me a key word. If Heidegger, Ellul, and Weizenbaum represent the shadow aspects of technology as an extension of the brutally mechanical and exploitive part of human nature, perhaps Clark points to a complementary way of looking at the same trait. Perhaps the transaction between danger and opportunity necessitated by our tool-making nature is not a zero-sum game but a balancing act. Certainly, we wouldn’t be using personal computers with graphic interfaces to explore a worldwide network if machines that entered the world as weapons had not been repurposed by determined people who saw them as “mindware upgrades.” The first electronic digital computer was created by U.S. Department of Defense contractors to perform artillery and nuclear weapons calculations but was transformed into something else entirely by a few idealists who were convinced that computers could help people think more effectively.77

  Vannevar Bush, who commanded the scientific war effort for the United States during World War II, saw that the collective scientific enterprise around the world was creating knowledge at such a growing rate that keeping track of what we know loomed as a future problem. In a visionary article in the Atlantic Monthly in July 1945, titled “As We May Think,” Bush proposed a future technology that would help people navigate through knowledge more effectively.78 Bush planted the seminal idea that we needed to build machines to manage the knowledge amassed by our knowledge technologies.

  Early computer research, both government- and industry-sponsored, concentrated on turning the huge, computationally puny mainframe computers of the 1950s into “artificial intelligence.” Computer scientists and the first commercial computer vendors saw the technology as a brute force instrument, a pile driver for calculations, an internal-combustion engine for data processing. Licklider and Engelbart looked at the primitive computers of the early 1960s and saw how one day they could become more like alphabets or telescopes than pile drivers or accounting engines—amplifiers of human minds, not substitutes for them.

  Licklider, an MIT researcher studying bioacoustics, had the opportunity to work with the first computer that had been rigged so that the programmer could directly interact with it.79 When I interviewed Licklider in 1983, I asked him about that experience and he said, “I had a kind of religious conversion. The PDP-1 opened me up to ideas about how people and machines like this might operate in the future, but I never dreamed at first that it would ever become economically feasible to give everybody their own computer.”80 It did occur to him that these new computers were excellent candidates for the super-mechanized libraries that Vannevar Bush had prophesied. In 1960, Licklider’s article “Man-Computer Symbiosis” envisioned computers as neither substitute nor slave but partner for human thought: “The hope is that, in not too many years, human brains and computers will be coupled together very tightly, and that the resulting partnership will think as no human being has ever thought and process data in a way not approached by the information-handling machines we know today.”81

  Licklider’s vision might have remained obscure if Sputnik had not frightened the U.S. Department of Defense into creating ARPA, the Advanced Research Projects Agency, to fund wild ideas that could leapfrog conventional research. Licklider was put in charge of ARPA’s Information Processing Techniques Office in the early 1960s, where he sponsored the creation of blue-sky technologies that conventional computer manufacturers weren’t interested in—the graphical interface, the personal computer, and computer networks.82 The problems to be overcome in achieving such a partnership between computers and humans were only partially a matter of building better computers and only partially a matter of learning how minds interact with information. The most important questions were not about either the brain or the technology, but about the organizational restructuring that would inevitably occur when a new way to think was introduced. As it turned out, another maverick thinker in California, Douglas Engelbart, had been pursuing exactly this problem for years.

  Engelbart, a twenty-five-year-old veteran, had been a radar operator in World War II. When he read “As We May Think,” while awaiting a ship home from the Pacific, he realized that the postwar world would be dominated by problems of unprecedented complexity. He returned from the war and started working as an engineer in what was to become Silicon Valley but at that time was the world’s largest fruit orchard. One day in 1950, while driving to work, he realized that computers might be able to display information on cathode screens the way radar did and that people could use these specially designed symbol-manipulating devices to solve complex problems together. From the beginning, he saw a combination of languages, methodologies, and machines supporting new ways to think, communicate, collaborate, and learn. Much of the apparatus was social, and therefore nonmechanical. After failing to recruit support from computer science or computer manufacturers, Engelbart wrote his seminal paper, “A Conceptual Framework for the Augmentation of Man’s Intellect,” in order to explain what he was talking about.83 Engelbart came to the attention of Licklider. ARPA sponsored a laboratory at the Stanford Research Institute (SRI), the “Augmentation Research Center,” where Engelbart and a group of hardware engineers, programmers, and psychologists who shared Engelbart’s dream started building the computer as we know it today.

  I had stayed in touch with Engelbart since I had first interviewed him in 1983. He has always been frustrated by the attention paid to the easy part of his vision, creating computers that could amplify intellectual activities, and by the lack of attention devoted to the hard part, learning how groups can “raise the IQ of organizations.” Changing old habits of thought and communication turned out to be a great deal harder than creating multimedia supercomputers and the foundations of the Internet. Engelbart was one of the first to learn how new ways to cooperate involve knowledge different from that required to design chips or write programs.

  While the computer and telecommunication industries fight trillion-dollar battles, the spirit of cooperation for the fun of it finds its own channels. After the dotcom and telecom bubbles burst, the emergence of new voluntary community resources, from SETI@home to blogging, made it clear again that the big IPO is not the only reason people decide to work together.

  Is the kind of return on contributed investment that Moore’s, Metcalfe’s, and Reed’s laws describe applicable to cooperation beyond the world of software and e-commerce? What theory or metatechnology could provide a general framework for human-technology coevolution that is not hopelessly deterministic, naively utopian, cynically selfish, or dependent on altruism beyond self-interest?
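  Those three laws are, at bottom, claims about how value scales with participation: Moore’s law doubles capability over each fixed interval, Metcalfe’s law values a network of n members on the order of n², and Reed’s law values a group-forming network on the order of 2^n. The following minimal sketch (my illustration; none of it comes from the book) only shows how steeply the curves diverge:

```python
# Illustrative scaling functions for the three "laws" named above.
# The closed forms are the commonly cited ones; the numbers are not
# empirical claims, just the shapes of the curves.

def moore(periods: int, base: float = 1.0) -> float:
    """Capability after `periods` doubling intervals (roughly 18-24 months each)."""
    return base * 2 ** periods

def metcalfe(n: int) -> int:
    """Pairwise connections among n members: the n**2-order term."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Potential subgroups of two or more members among n people."""
    return 2 ** n - n - 1

print("capability after 10 doublings:", moore(10))  # 1024x the base
for n in (5, 10, 20):
    print(f"n={n:2d}  metcalfe={metcalfe(n):4d}  reed={reed(n):8d}")
```

Whether any of these formulas is empirically exact is much debated; the point is only that each successive law rewards added participants more steeply, which is what “return on contributed investment” means here.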

  If Ostrom, Axelrod, Foucault, Licklider, and Engelbart provide essential parts of a foundation for a new theory of cooperation amplification, Robert Wright provides a framework to fit them together. In his book, Nonzero: The Logic of Human Destiny, Wright applied to the history of civilization the same game theory that Axelrod had used to explain biological and social phenomena.84 Wright’s controversial conclusion is that humans throughout history have learned to play progressively more complex non-zero-sum games with the help of technologies like steam engines and algorithms and metatechnologies like money and constitutions. Wright avoided using the word “cooperation,” because the research he cites covers instances in which participation in non-zero-sum games is not consciously cooperative. I have used the term “smart mobs” because I believe the time is right to combine conscious cooperation, the fun kind, with the unconscious reciprocal altruism that is rooted in our genes. The technologies of mobile communication and pervasive computation could elevate to a new level the non-zero-sum game-playing Wright chronicles.

  Recall from Chapter 2 that a zero-sum game is winner-take-all. For every winner, there has to be a loser. Games like the Prisoner’s Dilemma have more subtle gradations of reward and punishment. In some non-zero-sum games, all players benefit if they cooperate. More people playing more complex non-zero-sum games create emergent effects like vibrant cities, bodies of knowledge, architectural masterpieces, marketplaces, and public health systems. Wright wrote that “cultural evolution has pushed society through several thresholds over the past 20,000 years. And now it is pushing society through another one.”85
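  A minimal sketch of the payoff structure may help, using the textbook Prisoner’s Dilemma values (the specific numbers are illustrative, not drawn from the book). The joint totals vary from outcome to outcome, which is precisely what makes the game non-zero-sum, and mutual cooperation leaves both players better off than mutual defection:

```python
# Standard Prisoner's Dilemma payoffs: temptation > reward > punishment > sucker.
PAYOFFS = {
    # (row player's move, column player's move) -> (row payoff, column payoff)
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

for moves, (a, b) in PAYOFFS.items():
    # In a zero-sum game a + b would be constant; here it ranges from 2 to 6.
    print(f"{moves}: payoffs {a} and {b}, joint total {a + b}")
```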

  The world has not become, nor is it likely to become, a happy-all-the-time, win-win enterprise. Starkly competitive zero-sum games coexist with increasingly sophisticated non-zero-sum games. We band together to bring down the big game and then fight over how to divide it. Humans did not stop committing atrocities when print literacy made science and democratic nation-states possible. Enormous suffering and huge disparities in wealth and opportunity exist, and at the same time, more people are more prosperous, healthy, and politically free than ever before. Wright’s cultural evolution is not a utopian concept, although it does offer hope that the trajectory of cultural evolution points in a generally positive direction—the more people find that they can harvest personal benefits by investing trust and practicing cooperation, the more they will invest in cooperative enterprise and help others join the venture. With the right knowledge, I believe we can catalyze this process, cultivate it, and nurture its growth. We can make a conscious effort to manage what Wright claims to be an unconscious human predilection that has driven cultural evolution.

  Certain technologies, Wright argued, can trigger human societies to reorganize at a higher level of cooperation. As an example, Wright offered the Shoshone, a Native American tribe that lived in a territory with no big game to hunt but an abundance of jackrabbits at certain times of year. Because of their stark environment, the Shoshone normally existed at a simple level of social organization, with every extended family foraging for itself. When the rabbits were running, however, the families banded together into a larger, closely coordinated group, to wield a tool too large for any one family to handle or maintain—a huge net. Working together with the net, the entire Shoshone hunting group could capture more protein per person than they could working apart. Wright declared that “the invention of such technologies—technologies that facilitate or encourage non-zero-sum interaction—is a reliable feature of cultural evolution everywhere. New technologies create new chances for positive sums. And people maneuver to seize those sums, and social structure changes as a result.”86

  Wright noted that people who interact with each other in mutually profitable ways are not always aware that they are cooperating; he cited evolutionary psychologists in asserting that unconscious underpinnings of cooperation—like affection and indignation—are rooted in genetic traits:

  Natural selection, via the evolution of “reciprocal altruism,” has built into us various impulses which, however warm and mushy they may feel, are designed for the cool, practical purpose of bringing beneficial exchange.

  Among these impulses: generosity (if selective and sometimes wary); gratitude, and an attendant sense of obligation; a growing empathy for, and trust of, those who prove reliable reciprocators (also known as “friends”). These feelings, and the behaviors they fruitfully sponsor, are found in all cultures. And the reason, it appears, is that natural selection “recognized” nonzero-sum logic before people recognized it. (Even chimpanzees and bonobos, our nearest relatives, are naturally disposed to reciprocal altruism, and neither species has yet demonstrated a firm grasp of game theory). Some degree of social structure is thus built into our genes. . . .

  In the intimate context of hunter-gatherer life, moral indignation works well as an anti-cheating technology. It leads you to withhold generosity from past nonreciprocators, thus insulating yourself from future exploitation; and all the grumbling you and others do about these cheaters leads people in general to give them the cold shoulder, so chronic cheating becomes a tough way to make a living. But as societies grow more complex, so that people exchange goods and services with people they don’t see on a regular basis (if at all), this sort of mano-a-mano indignation won’t suffice; new anti-cheating technologies are needed. And, as we’ll see, they have materialized again and again—via cultural, not genetic, evolution.87
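  The anti-cheating rule in that passage can be read as a simple algorithm: share by default, remember who failed to reciprocate, and let grumbling spread that memory so others shun the cheater too. Here is a minimal sketch, with agents and method names of my own invention rather than anything from Wright:

```python
# A toy model of indignation-as-anti-cheating: direct memory of cheaters
# plus gossip that copies that memory to third parties.

class Forager:
    def __init__(self, name: str):
        self.name = name
        self.known_cheaters: set[str] = set()  # learned firsthand or via gossip

    def will_share_with(self, other: "Forager") -> bool:
        # Withhold generosity from past nonreciprocators.
        return other.name not in self.known_cheaters

    def remember_cheater(self, cheater: "Forager") -> None:
        self.known_cheaters.add(cheater.name)

    def gossip_to(self, listener: "Forager") -> None:
        # Grumbling gives cheaters the cold shoulder group-wide.
        listener.known_cheaters |= self.known_cheaters

alice, bob, carol = Forager("alice"), Forager("bob"), Forager("carol")
alice.remember_cheater(bob)   # bob failed to reciprocate with alice
alice.gossip_to(carol)        # carol now shuns bob without ever meeting him
print(carol.will_share_with(bob))    # False: chronic cheating stops paying
print(carol.will_share_with(alice))  # True
```

In a face-to-face band this memory-plus-gossip loop suffices; Wright’s point is that once exchange happens between strangers, the same job must be done by new, culturally evolved metatechnologies.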

  The cultural innovations that reorganize social interaction in light of new technologies are “social algorithms governing the uses of technology.” Wright called these social methodologies “metatechnologies.” Perhaps gossip and reputation were the metatechnologies that emerged from speech. In the Middle Ages, the metatechnologies of capitalism—currency, banking, finance, insurance—pushed the hierarchical machinery of feudal society to transform into a new way of organizing social activity: the market. “The metatechnology of capitalism then combined currency and writing to unleash unprecedented social power.”88 Wright claimed that the emerging merchant class pushed for democratic means of governance not out of pure altruism but in order to be free to buy and sell and make contracts. Throughout this process, powerful people always seek to protect and extend their power, but new technologies always create opportunities for power shifts, and at each stage from writing to the Internet, more and more power decentralizes: “I mean that new information technologies in general—not just money and writing—very often decentralize power, and this fact is not graciously conceded by the powers that be. Hence a certain amount of history’s turbulence, including some in the current era.”89

  We’re heading into more of that turbulence that Wright mentioned. The metatechnologies that could constrain the dangers of smart mob technologies and channel their power to beneficial ends are not fully formed yet. I believe we can do wonderful things together, if enough people learn how. What might a new literacy of cooperation look like? Technologies and methodologies of cooperation are embryonic today, and the emergence of democratic, convivial, intelligent new social forms depends on how people appropriate, adopt, transform, and reshape the new media once they are out of the hands of engineers—as people always do.

  Over the next few years, will nascent smart mobs be neutralized into passive, if mobile, consumers of another centrally controlled mass medium? Or will an innovation commons flourish, in which a large number of consumers also have the power to produce? The convergence of smart mob technologies is inevitable. The way we choose to use these technologies and the way governments will allow us to use them are very much in question. Technologies of cooperation, or the ultimate disinfotainment apparatus? The next several years are a crucial and unusually malleable interregnum. Especially in this interval before the new media sphere settles into its final shape, what we know and what we do matters.

