When we gossip, we establish how others behaved, even if we weren’t there to witness it directly. We have, in a sense, access to the eyes and ears and experiences of our entire social network, making our functional network far vaster than the 150 people we call friends. We can say who behaved well and who behaved badly. Who was honorable and who was deceitful. Who is to be trusted, who to be feared, who to be avoided, who to be embraced. In establishing these basic facts, we become able to punish people who deviate from acceptable behavioral norms—like, for instance, those who would come forward against the Drake fortune and in so doing “jeopardize” the whole enterprise. Hartzell couldn’t have been any clearer on this point in setting up his con; any press, and the whole fortune might vanish.
The whole world doesn’t in fact lie, cheat, and steal, and life isn’t nasty, brutish, and short, because we know that others will know how we act and that we can suffer for it. We care what they think—and what they think can impact how we fare later on. Without social information sharing, the ability to gossip and form consensus around acceptable behaviors and sanctions, society would devolve quickly into a mass of people who take advantage of one another.
In 1997, Dunbar and his colleagues did something we’re taught from a young age to avoid: they eavesdropped. In university cafeterias, in bars, on trains, they discreetly (we hope) sat back as people went about their conversations. They focused specifically on those conversations that seemed to be between friends and that appeared relaxed and informal. Every thirty seconds, the eavesdropper would note the general topic of conversation, condensed into broad categories such as “technical/instructional” (someone explaining how, say, the election process or a car engine works), “work/academic” (complaining about a class or pesky meeting), “sport/leisure” (that lousy Knicks game . . .), and so on.
When the researchers analyzed the topics that had been covered, they discovered a surprisingly consistent pattern. It didn’t matter who was doing the talking or where it was taking place, whether you were younger or older, male or female, clearly in school or clearly not: over 65 percent of every conversation was taken up with social topics—for the most part, discussing others’ behavior and analyzing your own relative merits, or how others acted and what kind of a person you are. That means that every other topic combined—work, school, sports, culture, art, music, and everything else—makes up barely a third of our typical conversation. The remaining two thirds is, in one way or another, concerned with reputation, others’ and our own. In some cultures, that percentage appears to be even higher. In one study of Zinacantán Indians in Mexico, social topics occupied 78 percent of nearly two thousand recorded conversations.
In 1994, political scientist (and future Nobel laureate in economics) Elinor Ostrom had people make investments in two separate markets as they sat in front of their computers. As they made their bids, they could see what everyone else was doing as well—but not who those people actually were. One market promised a constant return no matter what. In the other, however, returns grew as a function of the number of investors. Which market would people choose to invest in, if they could choose only one?
It’s a typical dilemma of the commons—the type of potential conflict that Ostrom specialized in. (The name originally derives from the problem of sheep grazing on communal grass. If only one sheep grazes, all is well, but if everyone who owns a sheep lets it loose, the commons are depleted and there’s nothing left for anyone.) If everyone invests in the second market, the group will reach its maximum payout and each person will go home richer. But if enough people opt for the sure-thing steady payoff, that steady return ends up higher than the return of the no longer optimal second market. Ostrom found that, left completely to their own devices, people weren’t at all certain they could trust the anonymous others in the group. And so, many of them flocked to the first, steady market. As a result, everyone ended up making, on average, only one fifth of what they could have made had everyone joined market two. In other words, there were enough lone wolves who chose to bank on their own sure returns instead of trusting others that those who did trust in the collective good ended up doing worse.
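The payoff structure driving Ostrom’s two-market game can be sketched in a few lines of Python. The numbers below (a steady return of 5, and a second-market return that grows by 1 per investor) are illustrative assumptions, not Ostrom’s actual experimental parameters; they only reproduce the shape of the dilemma.

```python
# Toy model of the two-market choice: market 1 pays a constant return,
# while market 2's per-person return grows with participation.
# All numeric parameters here are illustrative assumptions, not
# Ostrom's real payoff schedule.

def payoffs(n_in_market2, steady_return=5.0, growth_per_investor=1.0):
    """Return (market-1 payoff, per-person market-2 payoff)."""
    market1 = steady_return                       # safe, regardless of others
    market2 = growth_per_investor * n_in_market2  # grows with participation
    return market1, market2

# If all eight players join market 2, cooperation beats the sure thing:
m1, m2 = payoffs(n_in_market2=8)
print(m2 > m1)  # True

# But if only three trust the group, the cooperators earn less
# than the lone wolves who took the steady payout:
m1, m2 = payoffs(n_in_market2=3)
print(m2 > m1)  # False
```

With enough defectors, trusting the collective is individually punished, which is exactly why Ostrom’s anonymous players drifted toward the safe first market.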
But then Ostrom changed the rules. Now, in the middle of the game, she called everyone to stop playing and come join her for a quick refreshment break. There, they could all meet one another face-to-face. They would no longer be placing a bunch of anonymous bids; they could now put names to the numbers they’d seen flashing on their screens. As the players returned to their separate computers, the game became far more cooperative. Now the payoff was about 80 percent of the maximum possible amount. All it had taken was that brief moment of social exchange. In Ostrom’s study, one final tweak brought performance to a new high: the ability to request that “defectors,” that is, those who opted for market one, be punished by the experimenter with a fine.
Political scientist Robert Axelrod has found that one of the most successful strategies in games without communication—where we don’t know who the players are and can’t rely on what we know about them—is tit for tat. You start off cooperating on your first turn. After that, you mirror what your partner did. If she cooperated, you reciprocate; if she defected, so do you. That way, if you both play nice at first, you establish a cooperative equilibrium early on. You are, in a sense, building your reputation as you go. That only works, of course, if you get to play repeatedly, as in Ostrom’s games. Otherwise, you need to come in with a reputation to begin with—but for that to happen, anonymity cannot be the norm. Luckily, in the real world, it almost never is.
A reputation is a shortcut. It lets us know how someone will likely act and how we should respond to them even if we’ve never met or spent any time getting to know them. The famous prisoner’s dilemma offers a useful example here: if both prisoners cooperate—that is, stay quiet—they both get off lightly; but if one defects and talks while the other doesn’t, the one who stayed quiet gets the rough end of the deal. One way to solve the problem would be to allow for communication—but the very essence of the dilemma is that communication is not permitted. If you could agree not to tell on each other, you’d both get off. But you don’t have that chance. What to do?
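Tit for tat is simple enough to state as code. The sketch below plays it in a repeated prisoner’s dilemma; the payoff values (3 for mutual cooperation, 5 and 0 when one side exploits the other, 1 for mutual defection) are conventional illustrative numbers from the game-theory literature, not figures from Axelrod’s actual tournaments.

```python
# Tit for tat in an iterated prisoner's dilemma.
# Payoff values are the standard illustrative ones, not Axelrod's data.

COOPERATE, DEFECT = "C", "D"

# (my move, partner's move) -> my payoff
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I stay quiet, partner talks: I'm exploited
    ("D", "C"): 5,  # I talk, partner stays quiet: I exploit
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(history):
    """Cooperate on the first turn; afterwards mirror the partner's last move."""
    return COOPERATE if not history else history[-1]

def always_defect(history):
    """A player with no regard for reputation."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total payoffs."""
    hist_a, hist_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)  # A remembers what B did
        hist_b.append(move_a)  # and vice versa
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation locks in
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then guarded
```

Against a fellow cooperator, tit for tat locks in the mutually beneficial outcome; against a habitual defector, it loses only the opening round before protecting itself. That mix of niceness and retaliation is what made the strategy so robust in Axelrod’s work.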
If we already have a certain reputation, it communicates for us. If we have never ratted someone out, chances are we won’t do it now. If we’ve been known to be shaky, others are much less likely to trust us now. In one study, Catherine Tinsley and her colleagues found that groups in which one person was known for his competitiveness ended up faring worse overall in a negotiation than others. People knew the reputation, were wary, and ended up not being able to come to as good an agreement. What people know about us affects how they act toward us.
Oscar Hartzell had effectively built the blow-off into the scheme from the start, making sure the fix wouldn’t be necessary: part of buying into the idea of the Drake fortune was a commitment to staying quiet, lest the prize be blown. You didn’t want to tell, because that would change how others would act: you wouldn’t get your money. If you had a reputation of silence, you stayed in the game. If others thought you might talk, they could tell on you to the big boss, and you’d never get your payout.
But such a built-in forcing mechanism need not be the norm. Often, we don’t even need external inducement; our own sense of self is the only blow-off and fix we need. We want people to think good things of us—and we fear the opposite. An Ann Freedman wants others to see her as a discerning doyenne of the art world, not as someone who falls for a scam, and so she persuades herself that there can’t possibly be a scam even when it’s staring her in the face. Chances are, had the whole case not been blown open quite so dramatically, many of the buyers who are now suing the former gallery and its director over the forgeries would have stayed shtum and quietly asked for refunds. You don’t want others to see you as an easy mark. (Indeed, not everyone who bought a Rosales painting has sued. And no one who has sued would speak to me; they did not want their names mentioned.)
Almost no one is immune to reputational slights, despite what they may want you to believe. We all say we don’t care what other people think, but when it comes down to it, most of us really do. We ourselves are the grifter’s best chance of a successful blow-off: we don’t want anyone to know we’ve been duped. That’s why the fix is so incredibly rare—why would it ever come to pressing charges, when usually all we want is for it all to quietly go away?
Our reputation is founded on how we act. We build it over time, by acting in ways that are consistent with the type of reputation we want. If we want to be feared, we punish, often and harshly. If we want to be loved, we reward, generously and frequently. If we want to be seen as fair dealers, we do the fair thing—like Victor Lustig, as he turned the fifty thou over to Mr. Capone.
Nicholas Emler, a social psychologist at the University of Surrey who studies reputational processes and gossip, argues that acting in ways consistent with a certain image is an important part of our social identity. “There are factual details of biography—in particular the history of their relationships with others—that people will piece together about one another and which draw on reports from third parties,” he argues. “But reputations are also judgments, about vices and virtues, strengths and weaknesses, based on accumulating patterns of evidence which societies constantly process and reprocess.”
We want to be seen as a certain type of person and so we act how that person would act. In her work on negotiation, NYU social psychologist Shelly Chaiken has repeatedly found that people employ tactics consistent with a particular reputation depending on how they want others to respond to them and what they hope to achieve. For instance, to get large concessions, they often act the part of the tough negotiator. In an unrelated study, researchers found that people often planned out how they would act in a situation in advance, so that they would convey a specific impression—and so develop a desired reputation. They would, for instance, follow rules like, “Be friendly so that he’ll think I’m giving him a good deal.” If he thinks that way, he’ll tell others I’m fair. And then everyone will think better of me.
In all of these cases, however, something central is happening, regardless of the behavior: somebody else is watching. It doesn’t matter if we act a certain way in private. What matters is that others see us acting in that way and pass it on to others—Dunbar’s gossip pipeline in action. Anonymous charitable donors are relatively rare, and often the anonymity is thinly veiled. Someone already has a reputation for giving generously and anonymously to certain types of places, so by the venue and the amount, it becomes clear who that someone is. “Social identities,” Emler writes, “are conferred or agreed by the collective, not merely assumed by the individual.”
We don’t just care about acting in certain ways. We care that others see us doing so. In one study, Emler and Julie Pehl asked a group of students to imagine that they’d been part of either good or bad events, for which they either were or weren’t responsible. For instance, they’d won a competition through luck or a scholarship through hard work, had gotten into a driving accident, or been falsely accused of theft. Some people were told that an acquaintance had seen it happen; others weren’t told anything. How much effort, if any, would they make to share the event with others, from close friends to casual acquaintances?
In cases where seemingly no one was there to witness the event, people were likely to go to a lot of effort to share the news that cast them in a good light, and were slightly less enthusiastic about sharing the negative events. The negative, they would share only with close friends and family; the positive, with the world. But that changed if others had been there: now they would go to great lengths to share the negative—so that they could put their own spin on things. They expected their witnesses to talk, and to talk quickly. It was therefore crucial to outmaneuver them and do some damage control.
We may even go so far as to behave differently in public and private. In 1999, psychologist Mark Whatley and his colleagues recruited a group of students to participate in what they thought was an art evaluation study. They would arrive, soon be joined by another student (actually a confederate), and spend some time looking at and responding to slides of different paintings. After the sixth painting, the experimenter turned on the lights: there would be a three-minute break to let the students rest their eyes.
At the end of the break, both students returned to the room. In some cases, the confederate came back with two packets of M&M’s. She’d picked them up from a vending machine, she explained, and decided to get some for the other student, too. In other cases, she would return empty-handed. The students then finished looking at the slides, and were taken to separate rooms for a private response questionnaire. A few minutes later, the experimenter would return, explaining that the other student (the confederate) had had to leave to go to work—she’d been dropping hints about being late for her job all evening. She did, however, leave behind some charity donation forms that she had asked the experimenter to pass along. The experimenter then left the room, leaving the actual student with a fake questionnaire and a real charity donation form.
What Whatley found was that two factors were important in determining whether someone decided to donate, and if so, how much. Half the time, the pledges were private. That is, the contribution would be anonymous, sent directly to the charity, Run for the Kids. Half the time, the pledges were public. That is, the student had to list her name and address, and the envelope itself was addressed to the organization, attention of the confederate. That alone substantially changed behavior. Not only did more people donate in the public condition, but they gave a substantially higher amount: $3.98, as compared with the $1.87 average donation in the private condition.
The second factor that made a difference: the M&M’s. People who’d gotten a favor became more likely to reciprocate by making a donation: $3.45, on average, compared with $2.32 for those who hadn’t gotten any candy.
Public reputation, Whatley concluded, mattered a lot. We care how we’re seen, and will act differently if we think someone is watching than if we think no one will notice. And we care about reciprocity: we expect others to treat us well if we’ve done them a good turn, so we’ll do a good turn to those who’ve treated us well (i.e., fed us some delicious candies).
When I was in my early twenties, freshly moved to New York City, I went on a date with a young man who’d also recently graduated from the same college. At some point in the evening, we found ourselves walking through Washington Square Park.
“Excuse me!” An obviously upset man came up to us. He was dressed neatly, a light jacket over a button-down and slacks. “I’m sorry to bother you,” he went on, a look of real consternation on his face, “but I need money for the train. I’ve forgotten my wallet and I can’t get home to New Jersey. Please, my family’s waiting. Anything at all you could spare would help.” Ever the savvy New Yorker, I gave him a skeptical eyebrow. “I’ll pay you back,” he continued. “Just give me your address, and I’ll send you the cash the moment I get home.” I remained unconvinced. My date, however, pulled out his wallet and handed him a ten-dollar bill. “Don’t worry about returning it,” he told the man.
Our poor train-missing gentleman had assessed the situation perfectly. A date, probably an early one. The man still wants to make an impression. Approach him, make your plea—and he’ll be generous. He doesn’t want the girl to think he’s cold or, god forbid, stingy. The story he told, too, was perfectly calibrated. He was a businessman commuting from New Jersey. He had a family. He just wanted a little help, not the whole train fare. And he was credible: he’d return it all. He wasn’t begging, just asking for a moment’s help. Who were we to refuse?
Actually, who were we to refuse? Later that evening, I was filled with guilt. Why was I so skeptical of the human race? Wouldn’t I want someone to help me if I’d ever lost a wallet or found myself cashless without a phone or way of getting home? At the time, I lived just a few blocks from Washington Square. The next evening, I made my way back and sat on a bench just to see what might happen. Sure enough, I heard a familiar voice: “Excuse me, I’m sorry to bother you . . .” I got up and left, guilt fully assuaged.
Reputation is why so many frauds never come to light, why the blow-off is the easiest part of the game, and the fix a rare occurrence indeed. The Drake fraud persisted for decades—centuries, in fact—because people were too sheepish about coming forward after all that time. Our friend Fred Demara was, time and time again, not actually prosecuted for his transgressions. People didn’t even want to be associated with him, let alone reveal themselves publicly by suing him. The navy had only one thing to say: go quietly—leave, don’t make a scene, and never come back. The monasteries went even further: they didn’t want Robert Crichton to even write about Demara’s time there. Some wrote incensed letters begging to be kept out of the whole thing. They didn’t want the good men of God to be tarnished by association with that no-good son-of-a-bitch of a swindler.
Warren Buffett puts it this way: “It takes twenty years to build a reputation and five minutes to ruin it.”
The Confidence Game Page 32