For instance, the correlation between amount of eye contact and
likelihood of deception is literally 0, while it is a very modest, and
essentially useless, .05 for gaze aversion.11 As stated in a recent
review of this literature: “Prominent behavioral scientists consistently express strong skepticism that judgments of credibility
can reliably be made on the basis of demeanor (body language)
cues.”12 Because no reliable cue exists, even supposed experts, people who are paid to spot liars for a living, do no better than chance at telling, from behavioral cues alone, who lies and who doesn’t.13
How Not to Catch a Liar
Why are there no reliable behavioral cues to lying and deception? As mentioned earlier, a proximal reason—a reason that
relies on the functioning of our psychological mechanisms—is
that people feel conflicting emotions whether they are lying or
telling the truth, which makes it difficult to distinguish liars
from truth tellers. An ultimate reason—a reason that relies on
evolution—is that such cues couldn’t be evolutionarily stable.
If such cues had ever existed, they would have been selected
out, in the same way that a poker player should not have any
tell when bluffing, at least if they want to keep playing poker
and remain solvent. Evolutionarily valid behavioral cues to
deception would be maladaptive—and, indeed, there don’t seem
to be any.
You may be wondering if that isn’t a major problem for the argument I have been making, namely, that we are naturally vigilant toward communicated information. How can we be vigilant if we can’t tell a lie from the truth? To make things worse, in most lie detection experiments, participants tend to err on the side of thinking people are telling the truth.
Some researchers, most notably psychologist Tim Levine, have argued that this behavior makes sense because people actually tell very few lies.14 Studies of lying in everyday life suggest that lies are rare—fewer than two a day on average—and that most of them are innocuous, such as pretending to be happier than we are (this is at least true among some samples of Americans).15 Instead of spending a lot of energy catching these minor deceits, we’re better off assuming everyone is truthful. This is reminiscent of the argument developed by philosopher Thomas Reid in the eighteenth century, when he claimed that our “disposition to trust in the truthfulness of others, and to believe what they tell us” is related to our “propensity to speak the truth.”16
From an evolutionary perspective, the Reid/Levine argument doesn’t hold. Given how often senders would benefit
from lying, if the amount of lying wasn’t somehow constrained,
it would balloon until no one could be trusted anymore. If we
simply assumed people were generally truthful, they would stop
being truthful. I’m sure you can think of a few lies you’d tell if
you were guaranteed that people would believe you and you’d
never get caught.
If we cannot rely on behavioral cues, how do we deal with the
issue of deceit in communication? How do we know who to
trust?
Negligence and Diligence
Because deceit relies on hidden intentions, it is intrinsically hard
to detect. We would have no idea what most people’s intentions
are if they did not tell us. In many cases concealing one’s intention is as simple as not voluntarily disclosing it. This is why proving in court that someone perjured themselves is difficult: it must be established not only that the accused’s statements were false but also that they knew the truth and intentionally withheld it.17
But deceit is not the only, or even the main, danger of communication.18 Imagine that you are looking to buy a used car. The
salesman might lie to you outright: “I have another buyer very
keen on this car!” But he is also likely to give you misguided advice: “This car would be great for you!” He might believe his
advice to be sound, and yet it is more likely to be driven by a
desire to close the sale than by a deep knowledge of what type
of car would best suit your needs. Now you ask him: “Do you
know if this car has ever been in an accident?” and he answers
“No.” If he knows the car has been in an accident, this is bad. But
if he has made no effort to find out, even though the dealership
bought the car at a suspiciously low price, he is guilty of negligence, and this isn’t much better. In the case at hand, whether
he actually knew the car had been in a crash, or should have
known it likely had been, makes little difference to you. In both
cases you end up with a misleading statement and a lemon.
Deceit is cognitively demanding: we have to think of a story,
stick to it, keep it internally coherent, and make it consistent with
what our interlocutor knows. By contrast, negligence is easy.
Negligence is the default. Even if we are equipped with cognitive mechanisms that help us adjust our communication to what
others are likely to find relevant, making sure that what we say
includes the information our interlocutor wants or needs to hear
is difficult. Our minds are, by necessity, egocentric, attuned to
our own desires and preferences, likely to take for granted that
people know everything we do and agree with us on most
things.19
We should thus track the relative diligence of our interlocutors, the effort they put into providing information that is valuable to us. Diligence is different from competence. You might have a friend who is very knowledgeable about food, able to discern the subtlest of flavors and to pick the perfect wine pairings. It makes sense to ask her for tips about restaurants. But if she makes no effort whatsoever to adjust her advice to your circumstances—your tastes, your budget, your food restrictions—then the advice is not of much use. If you repeatedly tell her you are a vegetarian and she sends you to a steakhouse, she hasn’t been diligent in finding the right information
to communicate. You would be justified in resenting this failure,
and trusting her advice less in the future.
Stressing diligence—the effort people make in sending us useful information—over intent to deceive shifts the perspective. Instead of looking for cues to deception, that is, reasons to reject a message, we should be looking for cues to diligence, that is, reasons to accept a message.20 This makes more sense from an open vigilance point of view: the baseline, then, is to reject what we’re told, unless some cues suggest our interlocutors have been diligent enough in deciding what to tell us.
Incentives Matter
When are our interlocutors likely to have done due diligence (including, obviously, not deceiving us)? Simply, when their incentives are aligned with ours: when they’re better off if we’re better off. There are, broadly, two reasons why incentives
between different individuals would be aligned. Sometimes, incentives are naturally aligned. For example, if you and your friend Hadi are moving a clothes dryer together, you both have an incentive to do it as effortlessly as possible by coordinating your actions, so that you lift at the same time, go in the same direction, and so forth. As a result, if Hadi tells you, “Let’s lift at three,” you have every reason to believe he will also be lifting at three. Other natural alignments in incentives are more long term: parents have a natural incentive for their children to do well,
and good friends for each other to be successful.
A simple thought experiment tells us whether or not incentives are naturally aligned: we just have to consider what would happen if the receiver of the information didn’t know who the sender really was. For example, Hadi would still want you to know that he will be lifting his end of the dryer at three, even if he weren’t the one telling you when he’d be lifting. Likewise, a
mother who wants to convince her son that he should study
medicine doesn’t care if she is the one doing the convincing, as
long as the son ends up a doctor.
On the whole, we are quite good at taking natural alignments
between incentives into account: when we have evidence that
our incentives and those of the sender are well aligned, we take
their opinion into account more. This is neatly demonstrated in
a study by psychologist Janet Sniezek and her colleagues.21 Advisers were asked for their opinions on a random topic (the price of backpacks), and the researchers observed how much participants took this opinion into account. After they had received
feedback on the real price of the backpacks, some participants
could decide to reward the advisers, and the advisers knew this
before offering their advice. The advisers had incentives to provide useful advice, and this was mutual knowledge for them and
the participants. As a result, participants put more weight on
the opinion of these advisers, whose incentives were aligned
with theirs.22
A much more dramatic example of our ability to start trusting people when we realize our incentives are aligned is that of Max Gendelman and Karl Kirschner.23 Gendelman was a Jewish American soldier who had been captured by the Germans in
1944 and was held prisoner close to the eastern front. Kirschner
was a wounded German soldier recovering at home, close to the
prison camp. The two had met while Gendelman was in the
camp; when he managed to escape, he took refuge in Kirschner’s
home. Kirschner told Gendelman that, as a German soldier, he
had to flee the Russian advance, and that the two should help
each other. Gendelman needed Kirschner to avoid being shot by
the Germans looking for escapees. Kirschner needed Gendelman
to avoid being shot by the Americans once they reached their lines. This alignment in their incentives allowed the two former enemies to communicate and collaborate until they were safely
behind American lines.24
If people can put more weight on what their interlocutor says
when they detect an alignment in their incentives, they can also
stop listening to what their most trusted friends or dearest family
members say when their incentives aren’t aligned. This is what
happens when friends play competitive games, from poker to
Settlers of Catan. Primary-school children are also able to take
incentives into consideration when deciding whether to believe
what they’re told. Psychologists Bolivar Reyes-Jaquez and Catharine Echols presented seven- and nine-year-olds with the following game.25 Someone—the witness—would see in which of two boxes a candy was hidden. Someone else—the guesser—would pick one of the two boxes to open. The witness could suggest to the guesser which box to open. In the cooperative condition, both witness and guesser got a treat if the guesser opened
the right box. In the competitive condition, only the guesser got a treat if the right box was opened, and the witness got a treat if the
guesser opened the wrong box. In the cooperative condition,
children in the role of guesser always believed the witness. By
contrast, in the competitive condition, they rightfully ignored
what the witness was saying, picking boxes at random.26
Following the same logic, children and adults are wary of self-interested claims. Seven-year-olds are more likely to believe someone who says they just lost a contested race than someone who claims to have won it.27 Plausibly the most important factor adults take into consideration when deciding whether or not someone is lying is that individual’s motivation: having an incentive to lie makes someone’s credibility drop like a stone.28
Incentives can be more or less naturally aligned, but they are
rarely, if ever, completely aligned. Hadi might like you to carry
the heavier side of the dryer. The mother might want her son to
be a doctor in part because she would gain in social status. Your
friend might want you to be successful, but perhaps not vastly
more successful than himself. Fortunately, humans have developed a great way of making incentives fall in line: reputation.
Reputation Games
Considering the natural alignment of incentives is essential when assessing the credibility of the sender of a message, but it is not enough on its own, as it does not help us solve the essential problem with the evolution of communication: What happens when incentives diverge?29
What we need is some artificial way of aligning the incentives
of those who send and those who receive information. Punishment might seem like the way to go. If we punish people who
send us unreliable information, by beating them up, say, then they
have an incentive to be careful what information they send us.
Unfortunately (or not), from an evolutionary perspective this
intuitive solution doesn’t work as well as it seems. Inflicting punishment on someone is costly: the sender being beaten up is
unlikely to take their punishment passively. If the damage from
the harmful message is already done, incurring the further cost
of punishing its sender doesn’t do us any good. Punishment is
only valuable as a deterrent: if the sender can be persuaded, before they send a message, that they will be punished if they send
an unreliable message, they will be more careful.30
The question thus becomes one of communication: How do
we communicate that we would be ready to punish people who
send us unreliable messages? At this stage, the conundrum of the
evolution
of reliable communication rears its head. Every body,
including people who don’t have the means or the intention of
punishing anybody, would be better off tel ing every one they wil
be punished if they send unreliable messages. Individuals who
would genuinely punish unreliable senders need a way of mak-
ing their signals credible. Far from solving the prob lem of reli-
able communication, punishment only works if the prob lem of
reliable communication has already been solved.
Fortunately, humans have evolved ways of cooperating and
aligning their incentives by monitoring each other’s reputations.31
For a very long time, humans who were bad either at selecting cooperation partners or at being cooperation partners did not fare very well, at least on average. For the worst cooperation partners, ostracism was like a death sentence: surviving on our own in the wild is next to impossible.32 As a result, we have become very good at selecting cooperation partners and at maximizing the odds that others will want to cooperate with us.33
Being a diligent communicator is a crucial trait of a good cooperation partner. Receivers should be able to keep track of
who is diligent and who isn’t, and adjust their future behavior on this basis, so that they are less likely to listen to and cooperate with people who haven’t been diligent. If this is true, senders
have an incentive to be diligent when they communicate with
receivers with whom they might want to cooperate—or even
receivers who might influence people with whom they might
want to cooperate. The incentives between senders and receivers have been socially aligned.
Even if, thanks to the social alignment of incentives, we can increase senders’ willingness to be diligent in what they tell us, we can’t expect them to always be maximally diligent. This would be an unfair demand. For Simonetta, a foodie friend of yours, to give you the best possible advice, she would have to learn everything
about your tastes, which restaurants you have eaten at lately, what
price you can afford, who you plan on taking out, and so on. If
people were always expected to be maximally diligent, and if they
could expect negative consequences—decreased trust, less cooperation—from failing to reach this goal, they would often