ological issue will undoubtedly yield results of relatively limited practical utility. Accordingly, the existing research should be reviewed cautiously and critically.
One potential solution to these problems is to employ a different method from that traditionally used in research on evaluating truthfulness: a series of case studies in which verbal and nonverbal behaviour are examined and determinations of truthfulness are made on an individual basis via an empirically grounded and experience-informed approach (see below). With such an approach, quantitative and qualitative analyses can be combined: individual cases are evaluated qualitatively and can thereafter be aggregated and analysed quantitatively. Not only would this approach serve to overcome the limitations discussed above, it would also help focus researchers on developing better-informed approaches to evaluating truthfulness, as opposed to searching for the all-elusive ‘signs’ of deception. As expanded on below, such diagnostic signs have yet to reveal themselves and, moreover, are unlikely to exist. Of course, single-case research designs come with their own complexities: they are labour-intensive and costly, which may explain why this approach has never gained favour in such a competitive, publication-driven arena. Nevertheless, we argue that case studies will prove very useful in understanding how to evaluate truthfulness in applied contexts.
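To illustrate the aggregation step just described, the following minimal sketch (in Python) shows how individually, qualitatively evaluated cases might be pooled for a simple quantitative summary; the case records, field names and values are hypothetical rather than drawn from any data set discussed in this chapter.

    # A minimal sketch (hypothetical data): cases evaluated qualitatively,
    # then pooled for a simple quantitative summary.
    from collections import Counter

    cases = [
        {"id": 1, "determination": "truthful", "notes": "account consistent with evidence"},
        {"id": 2, "determination": "deceptive", "notes": "unexplained contradiction on timeline"},
        {"id": 3, "determination": "truthful", "notes": "emotion congruent with content"},
        {"id": 4, "determination": "inconclusive", "notes": "insufficient comparison material"},
    ]

    # Quantitative aggregation across the qualitatively evaluated cases.
    counts = Counter(c["determination"] for c in cases)
    for outcome, n in counts.items():
        print(f"{outcome}: {n}/{len(cases)} ({n / len(cases):.0%})")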
Pre-training accuracy in evaluating truthfulness
One of the major findings in the research on evaluating truthfulness is that most individuals, irrespective of professional background, are poor at distinguishing truths from lies. Ekman & O’Sullivan (1991) examined the ability of a large group of professionals and non-professionals, including police officers, secret service agents, polygraphers, psychiatrists and college students, to evaluate truthfulness by showing them a series of videos of individuals lying or telling the truth. Some video clips depicted individuals lying or telling the truth about their opinions on sensitive subjects, such as the death penalty, while others depicted individuals lying or telling the truth about their participation or non-participation in a mock crime. The researchers found no relationship between gender and participants’ ability to tell who was lying and who was telling the truth, no relationship between years as an investigator/professional and the ability to evaluate truthfulness, and no relationship between confidence in one’s ability to evaluate truthfulness and one’s actual ability. Men have, however, been found to be more confident in their wrong decisions (e.g., Porter, Woodworth & Birt, 2000), once again highlighting the importance of considering individual differences. The major finding from Ekman & O’Sullivan’s (1991) study was that, as a group, participants were able to differentiate truths from lies only at chance levels. Only one subgroup, the secret service agents, evaluated truthfulness at a level higher than chance (64%), although only marginally so and not to a level necessary for effective job performance. The flavour of Ekman & O’Sullivan’s results has been replicated with different stimuli and participants, suggesting that most people, irrespective of profession and experience, cannot accurately evaluate truthfulness (Porter et al., 2000).
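As a rough illustration of what ‘only marginally above chance’ can mean, the sketch below applies a one-sided binomial test to a judge scoring 64% correct on a set of truth/lie clips; the number of clips is an assumed figure for illustration and is not taken from Ekman & O’Sullivan (1991).

    # Illustration only: is 64% accuracy distinguishable from guessing (50%)?
    # n_clips is a hypothetical figure, not the number used in the study.
    from scipy.stats import binomtest

    n_clips = 25
    hits = round(0.64 * n_clips)  # 16 correct judgements out of 25

    result = binomtest(hits, n=n_clips, p=0.5, alternative="greater")
    print(f"{hits}/{n_clips} correct ({hits / n_clips:.0%})")
    print(f"Probability of doing at least this well by guessing: {result.pvalue:.3f}")

With such a small hypothetical set of clips, even 64% accuracy is difficult to distinguish from guessing, which underlines why accuracy figures of this magnitude are of limited practical value.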
Roadblocks to the accurate evaluation of truthfulness
Research has demonstrated a number of roadblocks that prevent individuals from accurately evaluating truthfulness (Ekman, 1992; Hervé, Cooper & Yuille, 2008; Vrij, 2000). Heading the list is a lack of evidence-based knowledge and skills specific to evaluating truthfulness, which results in individuals relying on their ‘experience’ and/or popular myths (see below). More generally, another roadblock reflects a lack of critical thought: critical thinking is a necessary, but not sufficient, component of conducting evaluations and of evaluating truthfulness within such evaluations. Each roadblock is discussed in turn.
In terms of lack of knowledge, research indicates that most individuals do not know what lies and truths look like (Akehurst, Kohnken, Vrij & Bull, 1996; Ekman & O’Sullivan, 1991; Porter et al., 2000; Vrij, 2004). It is clear
that people rely on certain clues related to what they think lies and truths look like; however, research indicates that, more often than not, such heavily relied-upon clues (e.g., that all liars will experience anxiety/fear and, therefore, avoid eye contact; Ekman, 1992) are wrong. Such clues are simply myths, often perpetuated in the media and in professional manuals, but lacking empirical support.
With regard to skills, if the skills required for the job are lacking in breadth and depth, the job cannot be performed adequately. For instance, if evidence-based approaches are not used for the assessment of risk for recidivism, substantial false-positive and false-negative errors will be made (Monahan, 1981). The same is true with respect to evaluating truthfulness: if the right ‘tools for the job’ are absent, it is impossible to do that job. This is especially notable in this context given that the vast individual differences in how people reveal their lies dictate a need for a vast arsenal for detecting lies. Nevertheless, it is sometimes the case that, even when people have the right tools for the job, they use them in the wrong way. For example, individuals could be trained in proven approaches to investigative interviewing and in evaluating verbal clues to credibility (i.e., two approaches integral to evaluating truthfulness), but such skills could still be poorly applied (i.e., rigidly rather than fluidly and flexibly). This is especially likely to occur over time; that is, too often individuals fall prey to drift, thus illustrating the need for practice and quality control. Finally, sometimes individuals fail to use the tools at all. The consequences of the first generation of risk assessment studies are a case in point. In this generation, clinicians relied on their clinical opinion as opposed to empirically validated risk inventories, and errors were made more often than not (Steadman & Cocozza, 1974; Thornberry & Jacoby, 1979; for a review, see Monahan et al., 2001). A similar lesson has been learned in the area of evaluating truthfulness: empirically validated tools are needed for the job!
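To make the two error types concrete, the brief sketch below computes false-positive and false-negative rates from a hypothetical 2 × 2 table of judgements against ground truth; all counts are invented for illustration.

    # Hypothetical counts: 'deceptive'/'truthful' judgements against ground truth.
    true_positive = 30   # judged deceptive, actually deceptive
    false_positive = 20  # judged deceptive, actually truthful
    true_negative = 35   # judged truthful, actually truthful
    false_negative = 15  # judged truthful, actually deceptive

    # Truthful people wrongly judged deceptive.
    false_positive_rate = false_positive / (false_positive + true_negative)
    # Deceptive people wrongly judged truthful.
    false_negative_rate = false_negative / (false_negative + true_positive)
    accuracy = (true_positive + true_negative) / (
        true_positive + false_positive + true_negative + false_negative
    )

    print(f"False-positive rate: {false_positive_rate:.0%}")
    print(f"False-negative rate: {false_negative_rate:.0%}")
    print(f"Overall accuracy:    {accuracy:.0%}")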
Another roadblock relates to failing to consider how knowledge and skills change over time. Within any area of psychology, and most other disciplines for that matter, knowledge and skills change as the evidence supporting them changes. Consistent with most assessment practices, the accurate evaluation of truthfulness requires individuals to stay up to date with the literature. Moreover, professionals have an ethical obligation to stay current in the literature related to their areas of practice. Keeping up to date with the literature and implementing its suggestions in clinical practice will prevent drift and related problems.
Although proper knowledge and skills are clearly important, a lack of critical thought is arguably the major roadblock to accurately evaluating truthfulness. Unfortunately, it is not uncommon for individuals to fail to evaluate each case on its own merits and to adopt a ‘cookie cutter’ approach to the task at hand. Such lack of objectivity can frequently be traced to internal or external factors. In terms of the former, poor psychological and/or physical health and/or egos too often impact on evaluators’ decision-making. With regard to external factors, individuals may be pressed for time because of an onerous workload or unreasonable deadlines. Moreover, lack of objectivity
may stem from an a priori bias against the person being assessed. Lack of critical thinking also leads to a failure to consider alternative hypotheses. Just because a given question appears to be a ‘no-brainer’ does not mean that it should be treated as such. Indeed, the decisions made in the forensic arena affect the lives and well-being of many individuals and, therefore, alternative hypotheses must be considered before a conclusion is reached. Finally, lack of critical thinking may lead to a failure to check and double-check the conclusions drawn. The approach to evaluating truthfulness introduced in this chapter requires individuals to frequently re-evaluate their conclusions in light of the evidence on which those conclusions were based. Indeed, the business of evaluating truthfulness is so complex that it requires a conscientious, quasi-perfectionist approach.
The bottom line is that the roadblocks to evaluating truthfulness need to be overcome; that is, individuals need to know about evidence-based practice in evaluating truthfulness. To this end, the following section outlines empirically based training components for the accurate evaluation of truthfulness. These training components form the basis of the approach introduced below.
Evidence-based training components for the evaluation of truthfulness
A review of research on clinical decision-making in general and on evaluating truthfulness in particular suggests that training in evaluating truthfulness involves four major areas: (i) bad habits need to be unlearned; (ii) evidence-based knowledge about evaluating truthfulness needs to be acquired; (iii) empirically validated tools need to be learned and practised; and (iv) a method that emphasizes critical thinking in evaluating truthfulness needs to be used, the latter of which is perhaps the most difficult area to train. Each component is discussed in turn below.
Unlearning bad habits
Unlearning bad habits requires knowledge. Without basic, empirically based knowledge about evaluating truthfulness, individuals tend to make common errors. As some researchers have suggested that the state of the research on evaluating truthfulness is not yet adequate to support its use in practice (e.g., Vrij, Mann & Fisher, 2006), it is argued that, at the very least, individuals should be informed of the errors, or myths, that riddle their work, as well as of methods for avoiding such errors. Although many myths exist (see Ekman, 1992; Vrij, 2000), they can be broadly categorized as either experiential or societal in nature, although these are not mutually exclusive categories.
Experientially-driven myths stem from individuals’ personal experiences. For example, some people rely on what has been termed the ‘me’ theory of behavioural assessment (Ekman, 1992); that is, they assume that others will behave as they themselves do when telling the truth or lying. Under the ‘me’ theory, for example, someone who avoids eye contact when lying will view others as lying when they avert their gaze. Unfortunately, this approach more often than not results in what has been termed the ‘idiosyncratic error’: failing to take into account the unique behaviours of individuals (ibid.). Not only may individuals differ within a culture (e.g., some people often rub their noses; others routinely manipulate the hair on their face), research has also begun to identify important cross-cultural differences (e.g., eye gaze has been found to vary across cultures; McCarthy, Lee, Itakura & Muir, 2006).
Some individuals, particularly those with experience in evaluating truthfulness, often rely on ‘gut instincts’ or ‘intuitions’ about whether someone is telling the truth or lying. It is not suggested that individuals should ignore their instincts or intuitions; indeed, a recent review of research on intuition has demonstrated that, at least occasionally, intuition can point people in the right direction (Hodgkinson, Langan-Fox & Sadler-Smith, 2008). However, we suggest that instincts and intuitions should not be viewed as answers in and of themselves. Rather, they should be viewed as hypotheses to be tested against the available evidence. If the data do not support the person’s intuition or instinct, no conclusion should be drawn on the basis of intuition or instinct alone.
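One way to make this prescription concrete is sketched below: an intuition is logged as a hypothesis and retained as a conclusion only if the recorded evidence supports it and nothing contradicts it. The function, evidence labels and log entries are hypothetical illustrations rather than part of any established protocol.

    # Hypothetical sketch: treat an intuition as a hypothesis to be tested
    # against logged evidence, rather than as a conclusion in itself.
    def evaluate_intuition(intuition: str, evidence: list[str]) -> str:
        supporting = [e for e in evidence if e.endswith(f"supports:{intuition}")]
        contradicting = [e for e in evidence if e.endswith(f"contradicts:{intuition}")]
        if supporting and not contradicting:
            return f"conclusion: {intuition} (supported by {len(supporting)} evidence item(s))"
        return "no conclusion: the available evidence does not support the intuition"

    evidence_log = [
        "alibi corroborated by CCTV -> contradicts:deceptive",
        "statement consistent across interviews -> contradicts:deceptive",
    ]
    print(evaluate_intuition("deceptive", evidence_log))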
Another experientially-driven myth concerns the relationship between experience and accuracy in evaluating truthfulness. Here, the research findings have been mixed: some (e.g., Ekman & O’Sullivan, 1991) report no benefit from experience, while others (e.g., Mann, Vrij & Bull, 2004) have shown a positive benefit of experience on the detection of lies. Experience can also produce overconfidence, which unfortunately too often leads evaluators to become myopic and, therefore, to seek the same false clues time and time again. The research is clear: if people rely solely on their own idiosyncrasies and/or experiences as the basis for their judgements in evaluating truthfulness, they are likely to be wrong most of the time (see Ekman, 1992; The Global Deception Team, 2006).
Societally-driven myths reflect shared beliefs about ‘the sign or signs’ of deception or of truth-telling (Ekman, 1992; Ford, 2006). In terms of truth-telling, there are the common myths that maintaining eye contact and a lack of observable anxiety are reliable signs of honesty. Conversely, there are the opposite myths that sweating, anxiety and/or fear are signs indicative of deception. This type of myth unfortunately results in what Ekman (1992) has termed the ‘Othello error’ (after Shakespeare’s tragedy, Othello). Othello wrongfully believed that his wife, Desdemona, had been unfaithful to him. When he confronted her about her suspected infidelities, she presented as fearful. Desdemona had considerable reason to be fearful, as Othello had already
murdered her suspected lover. Othello’s error occurred when he misattributed Desdemona’s fear of being disbelieved as evidence of her guilt. It is important to understand that fear of being disbelieved looks the same as fear of being caught in a lie. That is, spotting an emotion only informs us about its kind, not its source or cause (Ekman, 2003). Consequently, it is important to be mindful of the reasons why someone may be experiencing an emotion in a given circumstance.
Proponents of neuro-linguistic programming (NLP) suggest that looking up and to the left is associated with lying. However, there is no research to support this proposition. Not only does the research indicate that the direction of eye gaze has no such meaning, but averted gaze could be a clue to concentration, could reflect one’s attempt not to be influenced by the facial expressions of interviewees/peers, and/or could be associated with lying. Again, the research is clear: there is no ‘Pinocchio response’ indicative of deception (Ekman, 1992). That is, there is no particular physiological, physical or psychological response that individuals demonstrate when they lie that they do not also demonstrate when they are under stress and/or concentrating.
An error that reflects experiential influences but tends to be common within society, at least in Western culture, concerns the tendency to focus uncritically on verbal information to the detriment of nonverbal information, which appears to reflect an overemphasis on language development. Indeed, while children are known to be relatively proficient in nonverbal communication, adults, through socialization, have learned to focus more on the spoken word. As a result, facial expressions of emotion, for example, are usually ignored due to verbal overrides, particularly if the emotion displayed is at odds with what is being said. This speaks to the importance of active listening and