line media. This makes sense: a TV channel can attempt to portray the other side as made up of crazy extremists, but on social media, these crazy extremists are there for all to see, and it is easy to forget that they represent only a sliver of the population. Social media don't make us more polarized, but they make us think we are; more precisely, social media don't push their users to develop stronger views but, through increased perceived polarization, they might contribute to increased affective polarization, as each side comes to dislike the other more.5
When agents with wide audiences take sides, they are incentivized to create a distorted view of the coalitional stakes—making the other side appear stronger, creating conflicts out of nothing. If this strategy is successful, it can yield further epistemic distortions.
Agents that are perceived as taking our side in conflicts against a powerful enemy gain our trust: we believe they have our best interests at heart. Moreover, as they provide us with information that supports our views, they also come to be seen as competent
(as explained in the last chapter). In some cases at least, this strategy works: for instance, conservative Republicans find Fox News to be more credible than CNN, historically a broadly neutral network (although things are changing with the Trump presidency).6
This increased trust allows the transmission of some false information, at least at the margin. Cable news networks with a political slant spread more falsehoods than more neutral networks.7 This obviously doesn't mean all these falsehoods are believed; still, the attempt betrays an assumption by the networks that they won't be questioned. More important, the asymmetry in trust—when we trust people deemed to be on our side much more than those deemed to be on the other side—hinders the transmission of accurate information. We aren't challenged by the people we trust, and we don't trust the people who challenge us, potentially distorting what we know.
A series of clever studies have investigated the effect of Fox News Channel availability on political opinions and political knowledge. These studies rely on the fact that Fox News Channel was introduced in different U.S. towns in a somewhat haphazard fashion, as a function of deals signed with local cable companies. As a result, the researchers were able to look at the effects of Fox News availability on a range of outcomes and treat the results as if a giant randomized experiment had been conducted. These data show that Fox News Channel did have an effect on political views, making towns where it was available slightly more Republican leaning.8 What about political knowledge? Fox News made people more selectively knowledgeable.9 Where Fox News was available, people tended to know more about issues well covered by Fox (rather unsurprisingly), but also
to know less about issues poorly covered by Fox. Fox mostly covered issues for which the Republican Party was in broad agreement with its base. As a result, viewing Fox News reinforced the impression that the Republican Party platform aligned with the viewers' opinions, strengthening support for the party.10 Even if, in this case, the information being presented might not have been entirely fair and balanced, this example still supports Andrew Gelman and Gary King's contention that the media can affect political outcomes, but chiefly "by conveying candidates' [or the parties'] positions on important issues."11
While there is a danger that the hijacking of our coalitional thinking is turning the media landscape into increasingly vociferous fights between partisan hacks, it's good to keep in mind that there are countervailing forces. We can recognize that media personalities who appear to be on our side are, more often than not, of little use to us. At best, they provide us with information that justifies our views, but this information has to be sound to be truly relevant, something we only discover when we use the information in an adversarial debate. There is a social cost to be paid when we attempt to justify our views with arguments that are too easily shot down. Apart from those that cater only to extreme partisans, most media thus have an incentive to stick to largely accurate information—even if it can be biased in a number of ways.12 Moreover, our reaction to challenges isn't uniformly negative. In a fight with our partner, we might get angry at a friend who supports our partner instead of us. But, if they make a good point that we're in the wrong, we'll come to respect them all the more for helping us see the light (although that might take a little time). We're wired to think in coalitional terms, but we're also wired to form and value accurate beliefs, and to avoid looking like fools.
Trust in Strangers
When it comes to social media personalities or news channels,
at least we have time to gauge their value as information providers, as we see them on TV night after night. What about people we have only just met? How do we know whether they have our interests at heart? Given the lack of information about these strangers' past behavior, we must rely on coarse cues about their personality, the groups they belong to, and their current situation. These cues range from the very general (does this individual appear trustworthy?) to the very specific (is this individual well disposed toward me at this moment?).
As an example of a general trait, consider religiosity. In some
cultures religious people are seen as particularly trustworthy.13
As a result, in these cultures people who wear badges of religious affiliation are seen as more trustworthy even by the nonreligious.14 By contrast, other cues indicate trustworthiness only in the context of specific relationships. In a series of experiments, students were asked to say whom they would trust to be more generous toward them: another student from their own university, or a student from another university. The participants put more trust in the students at their own university, but only if they knew the students also knew the participants belonged to the same university. The participants did not think their fellow students would be more generous as a rule, only more likely to prove generous with those sharing an affiliation.15
People rely on a variety of cues to decide whom they can trust, from displays of religiosity to university affiliation. But how do these cues remain reliable? After all, if appearing to be religious, or to belong to the local university, makes one more likely to be trusted, why wouldn't everyone exhibit these cues whenever it could be useful? These cues are kept broadly reliable because
they are in fact signals, involving some commitment from their sender, and because we keep track of who is committed to what.
Someone who wears religious clothes but does not behave like
a religious person will be judged more harshly than someone
who behaves in the same way but does not display religious
badges. In an extreme use of religious badges, Brazilian gang
members who want a way out can now join a church, posting a
video of their conversion on social media as proof. But this isn’t
a cheap signal. Members of other gangs refrain from retaliating
against these new converts, but they also keep close tabs on
them. When a young man posted his conversion video just in
time to avoid being killed, the rival gang members “monitored
him for months, checking to see if he was going to church or had
contact with his former [gang] leaders.”16
More generally, we tend to spurn people who pretend to be
what they aren’t. If I walked around hospitals in scrubs wearing
a “Dr. Mercier” tag, people would be justifiably annoyed when
I revealed that my doctorate is in cognitive science. Even a construction worker who dressed and behaved like a rich businessman would face difficulties integrating with other workers, or with rich businessmen.
Still, some people can, at least in part, get away with pretending to be who they aren't. Con men are a good example.17 In The Sting, the characters played by Robert Redford and Paul Newman describe their world as that of grifters, opposed to the world of citizens, a world to which they couldn't and wouldn't want to belong. Big cons took time, as the hustlers had to progressively earn the mark's trust, to "play the con for him" (as the protagonists do in The Sting).18 This involved letting the mark get to know the con men, allowing the mark to earn some money, and setting up such an elaborate story that it became a stretch to believe it was all made up. The con perpetrated in The Sting—inspired
by real life—involved renting a room, disguising it as a betting saloon, and hiring dozens of actors to play the role of other gamblers. It is a wonder that more people did not fall for such cons.
Minor cons, by contrast, require minimal contact between the
con man and the mark. The first man to be called a con man was
Samuel Thompson, who operated around 1850 in New York and
Philadelphia.19 He would come up to people, pretend to be an
old acquaintance, and remark on how people did not trust each
other anymore. Making his point, he would wager that the mark
wouldn’t trust Thompson with their watch. To prove him wrong,
and to avoid offending someone who appeared to be a forgotten
acquaintance, some people would give Thompson their watch,
never to see him or their watch again.
Thompson relied on his “genteel appearance” (a coarse cue
indeed) to pressure his victims: they might not have trusted him
altogether, but they feared a scene if they blatantly distrusted
someone of their own social standing.20 This is how the fake doctor from the introduction got me to give him twenty euros.
Once you accept the premise that someone is who they say they
are, a number of actions follow logically: had that person been
a real doctor, I should have been able to trust him with the money.
And rejecting the premise, saying to someone’s face that we think
they are a fraud, is socially awkward.
The same techniques are used in social engineering: instead
of hacking into a computer system, it is often easier to obtain the
desired information from a human. In The Art of Deception,
hacker and social engineer Kevin Mitnick describes how valuable information can be extracted from employees. In one example, the social engineer calls up an employee, pretends to be from a travel agency, and makes up a phony trip that the employee supposedly booked.21 To understand how the error
might have occurred, the employee is asked to provide his
employee number, which later allows the social engineer to impersonate him. Again, the employee was relying on coarse cues:
that the individual on the line sounded like a genuine travel
agent.
The example of con men and social engineers suggests that
relying on coarse cues to trust strangers is a daft move, easily
abused. In fact, conning people is harder than it seems. For one
thing, we mostly hear about the cons that work. In total, six
people lodged official complaints against Thompson for
theft—not a huge number to start with, and we don't know how many people he tried his luck with and failed.22 Indeed,
by all accounts he was a “clumsy thief and unsophisticated
scammer.”23
Ironically, that most egregious of cons, the 419 scam, or Nigerian scam, illustrates how hard scamming really is.24 A few years back, we were bombarded with e-mails alerting us to a wonderful opportunity: someone, often from Nigeria, had a huge amount of money and offered us a cut of the pie if we would only wire them the small sum they needed to access a much bigger sum. This small investment would be repaid a hundredfold. Seeing these ludicrous messages, it is quite natural to think people incredibly gullible: How could anyone fall for such tall tales, sometimes losing thousands of dollars?25 In a perceptive analysis, computer scientist Cormac Herley turned this logic on its head: the very ludicrousness of the messages shows that most people are, in fact, not gullible.26
Herley started by wondering why most of these messages mentioned Nigeria. This scam had quickly become associated with the country, so much so that scam was one of the top autocompletes after typing Nigeria. Why, then, keep using the same country? Besides the country, there was clearly little attempt at credibility in the messages: the sender was a prince ready to part
with a good chunk of a huge sum, not exactly a common occurrence. Why make the messages so blatantly suspect? Herley noted that while sending millions of messages was practically free, responding to them cost the scammers time and energy. After all, no one would be sending the money right away. Instead, marks had to be slowly reeled in. Expending such effort was only worthwhile if enough marks ended up falling for the scam hook, line, and sinker. Anyone who would do a Google search, ask for advice, or read their bank's warning notices wouldn't be worth expending any effort on. The solution scammers adopted to weed out these people was to make the messages deliberately preposterous. In this way, the scammers ensured that any effort
spent engaging with individuals would only be spent on the most promising marks, those who were the least well informed. Ironically, if these scam attempts are so ludicrous, it is not because people are gullible but because, by and large, they aren't. If they were, scammers could cast a much broader net with more plausible messages.
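Herley's point can be restated as a back-of-the-envelope expected-value calculation. The sketch below is purely illustrative: the response rates, conversion rates, follow-up cost, and payout are invented numbers, not figures from his analysis, chosen only to show how a less plausible pitch can screen out costly, unprofitable replies.

```python
def expected_profit(n_messages, response_rate, conversion_rate,
                    cost_per_reply, payout):
    """Toy model of a mass-mailing scam.

    Sending is essentially free; every reply costs the scammer follow-up
    effort, but only a fraction of responders are reeled in to the end.
    """
    responders = n_messages * response_rate
    victims = responders * conversion_rate
    return victims * payout - responders * cost_per_reply

# A plausible pitch draws many replies, but most responders bail out once
# they search the web or ask for advice: lots of wasted follow-up effort.
plausible = expected_profit(1_000_000, response_rate=0.01,
                            conversion_rate=0.001,
                            cost_per_reply=20, payout=2_000)

# A preposterous pitch draws far fewer replies, but nearly everyone who
# does reply is a promising mark, so follow-up effort is rarely wasted.
preposterous = expected_profit(1_000_000, response_rate=0.0005,
                               conversion_rate=0.2,
                               cost_per_reply=20, payout=2_000)

print(plausible, preposterous)  # -180000.0 190000.0: the ludicrous pitch wins
```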
Effective Irrational Trust
Not only is getting conned a relatively rare occurrence, but there is a huge benefit from relying on coarse cues to trust strangers: it allows us to trust them at all. Economists and political scientists have devised a great variety of so-called economic games to test whether people behave rationally in simple, stylized interactions. One of these is the trust game, in which one player (the investor) is provided with an initial monetary endowment. They can choose how much to invest in the second player (the trustee). The amount invested is then multiplied (typically by three), and the trustee can choose to give back any amount to the investor. To maximize the overall benefits in a fair manner, the investor
would give all the money to the trustee, who would then give half of it back. However, once the investor has transferred the money, nothing stops the trustee from keeping it all. Knowing this, the investor should not transfer anything. No transfer is thus, in theory, the rational outcome. Moreover, messages from the trustee to the investor should have no effect, since they are the quintessential cheap talk: trustees can promise to give back half of the money, but no extrinsic force can make them keep their word.
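The arithmetic of the game is easy to make concrete. Here is a minimal sketch in Python; the threefold multiplier follows the description above, while the ten-unit endowment and the specific amounts invested and returned are just illustrative assumptions.

```python
def trust_game(endowment, invested, returned, multiplier=3):
    """Payoffs in a one-shot trust game.

    The investor keeps whatever is not invested; the invested amount is
    multiplied and handed to the trustee, who then returns some of it.
    """
    assert 0 <= invested <= endowment
    pot = invested * multiplier              # what the trustee receives
    assert 0 <= returned <= pot
    investor_payoff = endowment - invested + returned
    trustee_payoff = pot - returned
    return investor_payoff, trustee_payoff

# Full trust, fair trustee: both players end up with 15.
print(trust_game(endowment=10, invested=10, returned=15))   # (15, 15)

# Full trust, selfish trustee: the investor is left with nothing.
print(trust_game(endowment=10, invested=10, returned=0))    # (0, 30)

# The "rational" no-transfer outcome: the investor keeps the endowment.
print(trust_game(endowment=10, invested=0, returned=0))     # (10, 0)
```

The gap between the first and last calls is the whole puzzle: full trust met with fairness leaves both players better off than the theoretically rational outcome of transferring nothing.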
Yet many experiments have found that investors typically
transfer a good chunk of their endowment, and that trustees tend
to share back some of the proceeds.27 Moreover, promises work.
When trustees are given the opportunity to send a message to
the investors, they often promise to send money back. Investors
are then more likely to transfer money to the trustees, and trustees to share the money back.28 The mere fact that someone has
made a promise is sufficient to increase the level of trust, thereby
generating a superior (even if, in a way, less rational) outcome.
In this case, the coarsest cue— that the trustee would be a broadly
similar person to the investor—is sufficient to generate some