Smart Mobs

by Howard Rheingold


  Omidyar benefited from the power of Reed’s Law (Chapter 2). eBay is a “group-forming network” that self-organizes around shared obsessions; all the collectors of Turkish railway tickets, Dickens first editions, Pez dispensers, velvet paintings, and Ming vases find each other at their appropriate auctions and form their own communities. Surprisingly, eBay reported in 1997 that only 27 out of 2 million auctions over a four-month period were considered to involve possible criminal fraud and that 99.99 percent of the auctions attracting bids were successfully completed.25 “It’s almost impossible to believe that random strangers can trade like this without more problems. Retailers have a much higher rate of shoplifting than the fraud that eBay runs into,” commented an investment analyst.26

  The overall rate of fraud is not a reflection of innate human honesty. To the contrary, the smart mobs who use eBay to their advantage are always the target of would-be smarter mobs who try to find loopholes in the system, and the attacks of the would-be smarter mobs spur the efforts of smartest mobs who build improved reputation systems to counter known forms of cheating. eBay looks for evidence of the kind of “shill bidding” that was uncovered when a seller conspired to inflate the price of a painting.27 The low rate of fraud on eBay poses a dilemma familiar to students of cooperation. Peter Kollock, a professor at UCLA who has studied virtual communities, has noted that every unsecured financial transaction is a Prisoner’s Dilemma in which each party is tempted to benefit by failing to reciprocate:

  The temptation to defect in the exchange has led to a wide range of formal and informal mechanisms for managing this risk. The simple act of meeting face-to-face for the transaction helps reduce the likelihood that one party will end up empty handed. Separating the two sides of the transaction by time or space (such as purchasing something by mail or on credit) introduces greater risks: the party who moves second must be considered trustworthy or have some other form of guarantee. The formal infrastructure that exists to manage these risks is vast and includes such elements as credit card companies, credit rating services, public accounting firms, and—if the exchange goes bad—such services as collection agencies or the court system.28
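  To see why Kollock calls such an exchange a Prisoner's Dilemma, consider a toy payoff table. The numbers below are illustrative, not drawn from the text: each party does best individually by defecting against a cooperator, yet mutual defection leaves both worse off than mutual cooperation, which is exactly the trap reputation systems are built to escape. A minimal sketch in Python:

```python
# Illustrative payoffs (not from the book) for an unsecured trade.
# "Cooperate" means ship the goods or send the payment; "defect" means keep them.
PAYOFFS = {
    # (buyer_move, seller_move): (buyer_payoff, seller_payoff)
    ("cooperate", "cooperate"): (3, 3),   # trade completes, both gain
    ("cooperate", "defect"):    (0, 5),   # buyer pays, seller keeps the goods
    ("defect",    "cooperate"): (5, 0),   # seller ships, buyer never pays
    ("defect",    "defect"):    (1, 1),   # no trade; both keep what they had
}

for (buyer, seller), (b, s) in PAYOFFS.items():
    print(f"buyer {buyer:9} / seller {seller:9} -> buyer {b}, seller {s}")
```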

  In the Feedback Forum, eBay buyers and sellers can rate each other and comment publicly on the quality of the interaction. Each comment includes one line of text and a rating of +1 (positive), 0 (neutral), or -1 (negative). All feedback comments have to be connected to a transaction; only the seller and winning bidder can leave feedback. Buyers searching for items can see the feedback scores of the sellers. Over time, consistently honest sellers build up substantial reputation scores, which are costly to discard, guarding against the temptation to cheat buyers and start over with a new identity.
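  As a way of making the mechanics concrete, here is a minimal sketch of such a feedback ledger in Python. The class and method names (FeedbackLedger, leave_feedback) are hypothetical, not eBay's actual code; the sketch only enforces the rules described above: every rating references a recorded transaction, only the two parties to that transaction may rate each other, and a reputation is the running sum of +1, 0, and -1 ratings.

```python
from collections import defaultdict

VALID_RATINGS = {+1, 0, -1}  # positive, neutral, negative

class FeedbackLedger:
    def __init__(self):
        self.transactions = {}          # txn_id -> (seller, buyer)
        self.feedback = {}              # (txn_id, rater) -> (rating, one-line comment)
        self.scores = defaultdict(int)  # user -> cumulative reputation score

    def record_transaction(self, txn_id, seller, buyer):
        self.transactions[txn_id] = (seller, buyer)

    def leave_feedback(self, txn_id, rater, ratee, rating, comment=""):
        seller, buyer = self.transactions[txn_id]   # must reference a real transaction
        if {rater, ratee} != {seller, buyer}:
            raise ValueError("only the seller and winning bidder may rate each other")
        if rating not in VALID_RATINGS:
            raise ValueError("rating must be +1, 0, or -1")
        if (txn_id, rater) in self.feedback:
            raise ValueError("feedback already left for this transaction")
        self.feedback[(txn_id, rater)] = (rating, comment)
        self.scores[ratee] += rating                # reputations accumulate over time

    def score(self, user):
        return self.scores[user]

ledger = FeedbackLedger()
ledger.record_transaction("auction-1", seller="pez_trader", buyer="dickens_fan")
ledger.leave_feedback("auction-1", "dickens_fan", "pez_trader", +1, "Fast shipping")
print(ledger.score("pez_trader"))   # 1
```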

  Paul Resnick, whose GroupLens had been a pioneering recommender system in 1992, and Richard Zeckhauser performed empirical studies on “a large data set from 1999” that indicated that despite the lack of physical presence on eBay, “trust has emerged due to the feedback or reputation system.”29 Biological theories of cooperation and experiments in game theory point to the expectation of dealing with others in future interactions—the “shadow of the future” that influences behavior in the present. Resnick et al. assert:

  Reputation systems seek to restore the shadow of the future to each transaction by creating an expectation that other people will look back on it. The connections of such people to each other may be significantly less than is the case with transactions on a town’s Main Street, but their numbers are vast in comparison. At eBay, for example, a stream of buyers interacts with the same seller. They may never buy an item from the seller again, but if they share their opinions about this seller on the Feedback Forum, a meaningful history of the seller will be constructed. . . . Through the mediation of a reputation system, assuming buyers provide and rely upon feedback, isolated interactions take on attributes of a long-term relationship. In terms of building trust, a vast boost in the quantity of information compensates for significant reduction in its quality.30

  Resnick et al. concluded that reputation systems require three properties in order to function: First, the identities of buyers and sellers must be long-lived, whether or not they are pseudonymous, in order to create an expectation of future interaction. Second, feedback about interactions and transactions must be available for future inspection by others. Third, people must pay enough attention to reputation ratings to base their decisions on them. In regard to the third requirement, part of the effectiveness of eBay’s reputation system might derive from buyers’ and sellers’ belief that it works. Reputation, like surveillance, may induce people to police themselves.

  Research into reputation management systems stirred up an interdisciplinary vortex, similar to the way that the Prisoner’s Dilemma drew together mathematicians, economists, biologists, and sociologists toward a conceptual Schelling point about cooperation.31 Computer scientists who devise distributed artificial intelligence systems, in which large numbers of intercommunicating dumb units add up to hive-like emergent intelligence, use reputation as a way of controlling the behavior of the distributed agents.32 Computer security researchers use the term “web of trust” to refer to ways of authenticating people’s cryptographic keys by delivering them in person to people who will then certify the keys online by adding their digital signature, thus enabling encrypted communications to take place without a central key-certifying authority.33

  I visited an economist who is trying to create a discipline of reputation systems research. I had been drawn to the writings of Chrysanthos Dellarocas, formerly of the MIT Sloan School of Management, because he addressed one of the essential questions about the future of smart mobs: Are reputation systems useful tricks for book-buying and online auctions but ultimately incapable of mediating more complex social dilemmas? Or will reputation systems evolve into far more sophisticated social accounting systems?

  “Can reputation systems evolve? This question is the center of my research!”34 Professor Dellarocas received me in his high-rise office in New York University’s Stern School of Business. “My aim is to build a foundation in economics for a discipline of designing reputation systems,” he added.

  Like Resnick and others, Dellarocas recognized that online auctions are Prisoner’s Dilemmas: “In transactions where the buyer pays first, the seller is tempted to not provide the agreed upon goods or services or to provide them at a quality which is inferior to what was advertised to the buyer. Unless there are some other guarantees, the buyer would then be tempted to hold back on her side of the exchange as well. In such situations, the trade will never take place, and both parties will end up being worse off.”35

  Dellarocas studied the most frequent methods of cheating reputation systems. Buyers can give unfairly high ratings (“ballot stuffing”) and conspire to boost each other’s reputations (“shilling”). Buyers can give unfairly low ratings (“bad mouthing”). Sellers can provide good service to everyone except a few specific buyers they dislike or, conversely, can favor other buyers. For these fundamental vulnerabilities, Dellarocas suggests three countermeasures. Controlled anonymity, in which the system reveals reputation information to buyers and sellers but not the identity of either party, can reduce the possibility of collusion or reprisal. Dropping exceptional scores at the high and low end and discarding scores from the most frequent reviewers can furnish further defenses against cheating.
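  The second and third countermeasures lend themselves to a short illustration. The sketch below is not Dellarocas's published algorithm, just a plausible filter in Python with two assumed parameters (the trim fraction and the cap on any single rater's share of the ratings): it discards ratings from suspiciously frequent raters, then computes a trimmed mean that drops the exceptional scores at both ends.

```python
from collections import Counter

def robust_reputation(ratings, trim_fraction=0.1, max_share_per_rater=0.2):
    """ratings: list of (rater_id, score) pairs; returns a filtered mean score."""
    if not ratings:
        return 0.0

    # Discard raters who account for a suspiciously large share of all ratings
    # (a crude defense against shilling and ballot stuffing).
    counts = Counter(rater for rater, _ in ratings)
    limit = max(1, int(max_share_per_rater * len(ratings)))
    kept = [score for rater, score in ratings if counts[rater] <= limit]
    if not kept:
        return 0.0

    # Drop the exceptional scores at both ends (a trimmed mean), which blunts
    # both unfairly high and unfairly low ratings.
    kept.sort()
    k = int(trim_fraction * len(kept))
    trimmed = kept[k:len(kept) - k] or kept
    return sum(trimmed) / len(trimmed)

ratings = [("a", 1), ("b", 1), ("c", -1), ("shill", 1), ("shill", 1), ("shill", 1)]
print(robust_reputation(ratings))   # the shill's repeated ratings are ignored
```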

  Although he does believe that reputation systems will grow more capable over time, Dellarocas cautioned, “We are still far from an online reputation marketplace.” He pointed out that eBay and Amazon both guard access to their reputation databases and resist attempts to transfer reputation scores from one online marketplace to another. This raises the question of who owns our reputations. Are universal reputation systems possible?

  From all the current action in reputation system theory and practice, it isn’t hard to predict that scalable, trustable, portable, easy-to-use online reputation systems will continue to evolve. One researcher I’ll discuss in Chapter 7 (“Smart Mobs”) is even experimenting with distributed reputation systems for ad hoc wearable computer communities. These communities are in such early stages of their development that we can only speculate about how they will fit into mobile and pervasive technologies. Recent scientific discoveries about the role of reputation in evolution, social interaction, and markets offer provocative hints, however.

  Mobile, Pervasive, and Reputable

  Some of the biological and social research findings I encountered when looking into the nature of cooperation made new sense after I learned about the evolution of online reputation systems. So I went back to the sociologists, the evolutionists, and the game theorists, and lo!—reputation stood out as a single thread connecting the puzzling generosity of hunters in Tanzania, the peculiar pleasure that comes from punishing cheaters, the social function of gossip, the possibility that language evolved from grooming behavior, and the way some communities manage their commons without incurring tragedy. In each of these instances, reputation is the secret ingredient in cooperation.

  In Chapter 2, I presented the case that cooperative strategies like TIT-FOR-TAT succeed because they signal a willingness to cooperate but defend themselves against exploitation by retaliating against noncooperation. Taken together, these two simple rules seem to explain how self-interested individuals can agree to cooperate for common benefit in a wide variety of situations. Organisms that have been observed to cooperate, from stickleback fish to vampire bats, appear to do so on a basis of reciprocation: offering mutually profitable cooperation only to partners who are willing to return the favor and punishing those who have not reciprocated in the past by refusing to cooperate with them now. This pattern is known as reciprocal altruism.
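  A minimal sketch of TIT-FOR-TAT in an iterated Prisoner's Dilemma, using the conventional textbook payoffs (temptation 5, reward 3, punishment 1, sucker's payoff 0) rather than anything specified in the book, shows both halves of the strategy: it opens with cooperation and thereafter simply echoes whatever the partner did last.

```python
R, T, P, S = 3, 5, 1, 0   # reward, temptation, punishment, sucker's payoff

def payoff(my_move, their_move):
    if my_move == "C" and their_move == "C": return R
    if my_move == "C" and their_move == "D": return S
    if my_move == "D" and their_move == "C": return T
    return P

def tit_for_tat(history):
    """Cooperate on the first move; afterwards, echo the partner's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each list records the *other* player's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))    # exploited once, then retaliates: (9, 14)
```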

  In some organisms and some human societies, individuals have been so willing to cooperate that they apparently act against their own self-interest in order to provide benefit to others. Why do antelope hunters in Tanzania and turtle fishermen off Australia expend their energy providing game for tribal feasts, even at the expense of their own families? Biologists think the answer is something called “costly signaling”: The hunters are letting others know that they are good citizens and good providers and therefore good husband and partner material.

  Anthropologist Kristen Hawkes concluded that the Hadza hunters of Tanzania spend extra effort and take bigger risks hunting large game like giraffes that can feed the whole tribe instead of going for easier small game that could feed their own families because provisioning big game pays off in prestige, which can be translated into future political power, economic partnership, or sexual attention.36 Similarly, turtle hunters off the northeast coast of Australia provide feasts for their tribe at the expense of their time and their shares of the catch in order to send a “costly signal” that lets potential mates, allies, competitors, and hunting partners perceive their prowess and willingness to cooperate.37 Those who receive this information tend to trust it because of the cost the hunters paid to signal it. To biologists Pollock and Dugatkin, reputation evolved as a measure of an individual’s willingness to reciprocate, thereby raising the probability that the individual will be chosen as a partner in reciprocally cooperative activities like food-sharing, mating, and hunting together.38

  Spreading the word about reputation is where gossip comes in. One evolutionary biologist claims that the human brain grew large and language emerged because social grooming—taking turns picking insects out of each other’s fur—was inadequate for maintaining social bonds in groups of primates larger than fifty members. Grooming signals willingness to cooperate (literally, “you scratch my back and I’ll scratch yours”). In “Why Gossip Is Good for You,” Robin Dunbar argues that language grew out of complex social bonding between proto-human females. Whereas simpler signals and smaller brains could have remained adequate for coordinating the males’ hunting activities, Dunbar proposes they weren’t sufficient for the complicated lists of who did what to whom that could have been the basis for the original proto-human reputation system.39

  Research reported in 2002 offers provocative theories about how reputation, altruism, and punishment are structured to support human cooperation. A field now known as “experimental economics” has extended game theory to include two specific “minigames”: the “Ultimatum Game” and the “Public Goods Game.” Research using these games as probes indicates that

  People tend to exhibit more generosity than a strategy of rational self-interest predicts.

  People will penalize cheaters, even at some expense to themselves.

  These tendencies and the emotions that accompany them influence individuals to behave in ways that benefit the group.40

  The Ultimatum Game takes place between two players who play it once and never again. The players can share a sum of money, but only if they agree on how to split it. A coin flip gives one player the option of determining how much of the total to keep and how much to offer the other player. The other player, the “responder,” can accept the deal, in which case the money is split as proposed, or refuse it, in which case neither player gets any money. The result, which is not surprising to people who value fairness but puzzles those who see humans as rational creatures acting in their self-interest, is that two-thirds of experimental subjects offer between $40 and $50 out of a $100 total. Only four in one hundred people offer less than 20 percent, and more than half of the responders reject offers smaller than 20 percent of the total.
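  The logic of a single round is simple enough to sketch in a few lines of Python. The responder policy shown, refusing anything under 20 percent of the pot, is only an illustration of the behavior the experiments report, not a model taken from the book.

```python
def ultimatum_round(pot, proposer_offer, responder_accepts):
    """Returns (proposer_payoff, responder_payoff) for one take-it-or-leave-it round."""
    if responder_accepts(proposer_offer, pot):
        return pot - proposer_offer, proposer_offer
    return 0, 0                                   # rejection leaves both with nothing

# Responder who refuses anything under 20 percent of the total.
rejects_unfair = lambda offer, pot: offer >= 0.2 * pot

print(ultimatum_round(100, 40, rejects_unfair))   # (60, 40): accepted
print(ultimatum_round(100, 10, rejects_unfair))   # (0, 0): rejection costs the responder $10
```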

  Why would anyone turn down 20 percent of something in exchange for nothing? Martin A. Nowak, Karl Sigmund, and Karen M. Page of the Institute for Advanced Study at Princeton propose an evolutionary model. Emotions evolved over millions of years of living in small groups. In such groups, gossip distributed information about who accepts unfair treatment and who resists it passionately. If others learn that an individual is willing to settle meekly for less than a fair share, they are likely to make lower offers to that individual in the future. If the responder exhibits anger at being treated unfairly (being offered $20 instead of $50, for example), then others will have an incentive to make higher offers in future trades. A reputation for being a sucker is costly, and the emotional response could be an internal mechanism that serves to regulate cheating.

  The Public Goods Game has provided a window into the role of punishment in managing common resources. Swiss researchers Ernst Fehr and Simon Gächter devised a game in which four anonymous participants had to decide how much to invest in a common pot.41 Each player was given a stake to begin with, and each could keep whatever he or she didn’t invest in the pot. The amount invested by the four, each deciding without knowledge of how the other three would respond, was multiplied and then divided equally among the players, with no regard to who had been generous (and thus raised the pot for all at their own expense) and who had been rationally stingy (and got the divided pot plus their entire original stake). The game was then played in rounds, and the amount that each player invested was revealed after each round. In some of the games, players were allowed to spend part of their own money for the privilege of fining each other. In some games, the players were rotated among different groups, so that individuals did not have the opportunity to encounter each other again. Groups in which punishment was allowed produced more generous contributions to the common pool, but cooperation deteriorated rapidly in the absence of punishment. Even when there was no possibility of future interaction, many players punished free riders and reported that they did it because they were angry at the cheaters.42
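  A sketch of one such round, with illustrative parameters rather than Fehr and Gächter's exact ones (a stake of 20, a multiplier of 1.6, and fines that cost the punisher one point per three points deducted from the target), shows why punishment matters: without it, the free rider comes out ahead; with it, free riding no longer pays.

```python
def public_goods_round(contributions, stake=20, multiplier=1.6):
    """contributions: how much each of the players puts into the common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)          # divided equally, however generous each player was
    return [stake - c + share for c in contributions]

def apply_punishment(payoffs, fines, cost_per_point=1, fine_per_point=3):
    """fines[i][j] = points player i spends to punish player j."""
    payoffs = list(payoffs)
    for i, row in enumerate(fines):
        for j, points in enumerate(row):
            payoffs[i] -= points * cost_per_point   # punishing is costly to the punisher
            payoffs[j] -= points * fine_per_point   # and costlier still to the target
    return payoffs

# Three cooperators and one free rider: the free rider earns the most...
payoffs = public_goods_round([20, 20, 20, 0])
print(payoffs)                            # [24.0, 24.0, 24.0, 44.0]

# ...unless each cooperator spends 3 points to fine the free rider.
fines = [[0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 0]]
print(apply_punishment(payoffs, fines))   # [21.0, 21.0, 21.0, 17.0]
```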

  Punishment of free riders is “a very important force for establishing large-scale cooperation,” Fehr told the New York Times. “Every citizen is a little policeman in a sense. There are so many social norms that we follow almost unconsciously, and they are enforced by the moral outrage we expect if we were to violate them.”43 David Sloan Wilson, an evolutionary biologist, told the New York Times: “People are used to thinking of social control and moralistic aggression as forms of selfishness, and that you must be punishing someone for your own benefit. But if you look at the sort of punishment that promotes altruistic behavior, you see that it is itself a form of altruism. Once you think of punishment as a form of altruism, then the kind of person who doesn’t punish emerges as a kind of freeloader, too.”44

  When Elinor Ostrom looked for common characteristics of communities that managed commons without destroying them, she discovered that imposing sanctions on free riders, but doing it in a graduated manner, is key to cooperation. Self-monitoring is part of successful grassroots collaboration, a kind of many-to-many surveillance by mutual consent. If governance is to be democratic rather than Hobbesian, maintenance of social order requires technologies of mutual social control. Marc A. Smith, my cybersociology guru, applied Ostrom’s findings to his research on Usenet and speculated about the future of online reputation systems:

  Effective self-regulation relies upon sanctioning, which relies upon monitoring. If it is difficult to identify either the largest contributors or the most egregious free riders, sanctioning, whether in the form of reward or punishment, cannot function effectively. In the physical world, monitoring occurs in many ways. The mutual awareness of coworkers around common coffeepot chores or neighbors around maintaining common spaces is often constructed through casual interaction and fairly cheap monitoring and record keeping. But without the background of a social network of general awareness among neighbors, most neighborhoods become more dangerous and shabby. The widespread use of wireless digital devices means that monitoring the contributions and consumption of a common resource by potentially vast groups can be made fairly cheap and fluid.

 
