The Secret Life of Pronouns: What Our Words Say About Us
Ekman’s group made videotapes of the interviews and ultimately showed the tapes to a wide array of people, including psychologists, local and state law enforcement personnel, and high-level federal officers with training in interrogation, and asked them to distinguish those who were lying from those who were telling the truth. Overall, accuracy rates ranged from 51 to 73 percent, where 50 percent was chance.
After hearing Ekman’s presentation, I asked if he would be willing to share the transcripts of the interviews so that I could subject them to our computer program. Our arrangement was that he would send the transcripts but not tell us who was truthful. I would then send back a list of my conclusions about who were the liars and who were the truth-tellers. A few weeks later I had analyzed his data and made my determinations. His co-author Maureen O’Sullivan responded almost immediately, saying that I had done an amazing job. With this small sample, the computer had correctly classified between 65 and 75 percent of the interviewees.
The Ekman project revealed that essentially the same group of words was related to deception as in the other studies. That is, those who were honest in their discussions with Ekman used more and bigger words, had longer and more complex sentences, and expressed less positive emotion than did the liars. And, as before, the truth-tellers relied on more I-words.
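To make these counts concrete, here is a minimal sketch of how a transcript might be boiled down to the kinds of measures described above. It is only an illustration, with a tiny hand-picked word list; it is not the dictionary-based program actually used in these studies.

```python
# Toy illustration only: reduce a transcript to a few of the language
# measures discussed above (I-word rate, positive-emotion-word rate,
# average word length, and words per sentence). The word lists are
# invented for the example, not the dictionaries used in the research.
import re

I_WORDS = {"i", "me", "my", "mine", "myself"}
POSITIVE_WORDS = {"happy", "good", "great", "love", "nice", "glad"}  # illustrative

def language_profile(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1
    return {
        "i_word_rate": sum(w in I_WORDS for w in words) / n,
        "positive_rate": sum(w in POSITIVE_WORDS for w in words) / n,
        "mean_word_length": sum(len(w) for w in words) / n,
        "words_per_sentence": len(words) / max(len(sentences), 1),
    }

print(language_profile("I saw the film and I honestly thought it was dull."))
```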
Sweating It Out After Committing a “Crime”

The Ekman project required people to try to deceive someone else in a face-to-face interaction. In a sense, it was a test of wills concerning the students’ beliefs about a particular topic. The students hadn’t done anything wrong, nor had they behaved in a way that called into question their basic honesty. A slightly edgier way to study deception in the laboratory is to induce people to engage in a questionable behavior and then, with their permission, have them lie to an interrogator about what they have done. One standard technique for accomplishing this is called the “mock crime.” The idea is that participants agree to “steal” something—usually money—and then, when “caught,” they agree to lie to a researcher who doesn’t know whether they stole the money.
A few years ago, Matt Newman and I conducted just such a study. Students who had signed up for an experiment were first met by Matt, who explained that they would be sent to a room for several minutes. Once seated in the room, they were to look in a book by their chair and turn to page 160. If there was any money on that page, they should steal it and then put the book back. Later, they were informed, someone would enter the room and ask if they had taken the money. They were to deny taking it. Everyone agreed to the rules.
Once in the room, half of the students found the money (a single dollar bill); for the other half, no money existed. Another experimenter then came in, looked at page 160, and said, “There’s no money here, did you take it?” All said no. The experimenter then announced that they would be taken to another room and interrogated to determine if they were telling the truth. The interrogation was fairly mild: students were simply asked to describe in detail what they had done when they entered the room. The transcripts of the students’ statements were later computer analyzed, and, as with our other projects, we did much better than chance at catching the liars.
The mock-crime study and the various attitude studies all found similar effects: There are reliable “tells” in language that provide clues to deception. Soon afterward, several labs began testing the language-deception link. Judee Burgoon, one of the most respected researchers in the field of communication, conducted a striking number of experiments demonstrating that different types of deception—especially deception in natural interactions—have their own language fingerprints. She has repeatedly shown that lab-based deception studies generalize to groups other than college students. Gary Bond and his colleagues have found similar language effects with deception tasks among men and women prisoners across different prisons in the United States.
Although these studies are impressive, a recurring criticism of the various deception projects has been that virtually all are based on highly contrived laboratory studies. In fact, most of the studies are remarkably similar to parlor games. At the very worst, if any of the participants had been “caught” in these studies, they would have lost a few dollars—probably the equivalent of a single hand in a moderate-stakes poker game. What about language markers of deception when the stakes are real and potentially life changing?
CATCHING DECEPTION WHEN IT MATTERS: AVOIDING PRISON, HEARTBREAK, AND WAR
The advantage of running controlled experiments is that researchers can get a nice clean picture of what causes what. Conducting real-world projects with life-and-death consequences is far messier. Researchers generally have no control over the situation, and it is often hard to find cases where you know with certainty that some people are lying and others are telling the truth.
LYING ON THE WITNESS STAND: PERJURY AND EXONERATION
After publishing some of our deception studies, I received one of the most interesting graduate school applications I’ve ever seen. The applicant, Denise Huddle, had run her own successful private investigation firm for the previous twenty-one years. She was ready to retire and felt she needed to go to graduate school to gain the knowledge to build a foolproof lie-detection system based on language analysis. Everything in her application pointed to the fact that she was brilliantly smart and fiercely tenacious. We soon met and agreed that graduate school was not the way to go. Instead, I would work with her to develop a language-based lie detector and test it in the real world.
Denise’s idea was to find a real-world analog of the mock-crime study Matt Newman and I had conducted. Having spent thousands of hours in courthouses, Denise had watched hundreds of people testify in trials—many of whom were lying. Over several weeks, she and I hatched an imaginative study. We (meaning Denise) would track down the court transcripts of a large number of people who had been convicted of a major crime and who had clearly lied on the witness stand. In certain cases in the United States justice system, defendants who have been convicted of a major crime can also be subsequently convicted of perjury. The perjury conviction is usually the result of overwhelming evidence that the defendant lied on the stand. (Note to future criminals: If you are on trial for a crime you have committed and there is very strong evidence against you, do not lie. Just say, “I refuse to answer that question.” You’ll thank me for this advice.)
We also needed a separate group of people who clearly did not lie on the witness stand but who were found guilty anyway. Fortunately, Denise was able to track down eleven people who had been convicted of a crime but were later exonerated because of DNA or other overwhelming evidence. This was an important comparison group because the exonerated sample was made up of people we know were telling the truth on the stand, even though their juries did not believe them.
This certainly sounded like a simple project from my perspective. Just get the public records, run them through the computer, and bam! Fame and glory were just around the corner. As often happens, it wasn’t as simple as I had imagined.
Denise spent almost a year tracking down the eleven exonerated cases as well as the thirty-five people convicted of felonies. To qualify for the study, people had to have testified on the witness stand and the full records had to be intact. Denise had a small trailer that she would drive to federal court archive facilities. Every day, she would use the court’s copy machines to make copies of the courtroom records—which were sometimes hundreds of pages in length. At night, she would return to her trailer and then scan the copies into her computer. She would usually spend two weeks at a time in the trailer before taking a few days off. On returning to her home, she spent additional time poring over the court transcripts, pulling out sections that were pivotal to the juries’ decisions and those sections that were uncontested.
Denise’s work eventually paid off. Although the number of cases was somewhat small, the effects were meaningful. Most striking were the differences in pronoun use. As with most of the other studies, the exonerated defendants used first-person singular pronouns at much higher rates than those found guilty of a felony and perjury. I-words (primarily just the single word I) signaled innocence. Interestingly, the truly guilty defendants used third-person pronouns at elevated rates. They were trying to shift the blame away from themselves onto others. Also, as with many of the earlier projects, the truth-tellers used bigger words, described events in greater detail, and evidenced more complex thinking.
The pattern and strength of the effects were remarkable. I was thrilled, but Denise was disappointed. The computer correctly classified 76 percent of the cases, where 50 percent is chance. This was better than the juries had done but far below the 95 percent that Denise was hoping for. As of this writing, Denise remains optimistic that a language-based deception system is a realistic possibility. She might be right, but if she could build a system that reliably identified deception 80 percent of the time, it would be a cosmic breakthrough.
DETECTING DECEPTION: A LESSON IN REALITY
Having been connected to parts of law enforcement for much of my career, I’ve been impressed by the number of ridiculous claims people have made about detecting deception. Researchers with sterling reputations who have spent years studying the biological correlates of deception (such as those measured by the polygraph), nonverbal indicators of deception, the language of deception, and now the brain activity of deception have always come away with the same conclusion: we have systems that work better than chance, but not by much. No system has ever been shown to reliably catch liars at rates much higher than 65 percent. And even those with hit rates in that neighborhood (including me) have done so in highly controlled and artificial circumstances.
Nevertheless, I have heard over and over again about specific individuals or companies that claim to have a system that can catch deception 95 percent of the time. This is not possible. This will never happen in any of our lifetimes.
It is interesting that polygraph evidence is not allowed in the courtroom. The polygraph is actually impressive because it can accurately identify guilty people at rates close to 60–65 percent. Eyewitness identification, which is allowed in the courtroom, is probably accurate at comparable rates to the polygraph.
It is time that we begin to think differently about scientific evidence in the courtroom. Specifically, all of it is probabilistic. If polygraph, nonverbal, eyewitness, brain scan, or any other type of evidence can help classify the guilt or innocence of a witness, it should be introduced in court. However, it should be introduced in a way that makes its accuracy clear to the jury. Each type of evidence is simply something else for the jury to weigh, knowing that there are problems with each type. Life is probabilistic—courtroom evidence is no different.
LYING FOR LOVE: EVALUATING HONESTY IN POTENTIAL DATING PARTNERS
Deciding whether to go on a date with someone doesn’t have the same gravitas as deciding whether someone should go to prison for the rest of his or her life. Prison aside, though, online dating sites can determine whom you may live with for the rest of your life. Deception in selecting a date, and possibly a mate for life, is serious business. And word on the street is that people are sometimes deceptive in what they say about themselves online.
Just ask Jeff Hancock. Hancock and his colleagues at Cornell University conducted a riveting study with eighty online daters in the New York City area. The people—half male and half female—were selected based on their profiles on one of four commercial dating sites. All included a picture; information about their weight, height, and age; and a written description of who they were along with their interests. After agreeing to participate, each visited the research lab, where they completed several questionnaires. Their pictures were taken, their driver’s licenses scanned, and their height and weight measured.
Because he had independently verified each dater’s age (from the driver’s license), height, and weight, Hancock was able to determine how deceptive their online information was. He was also able to have a group of raters compare the photo he took in the lab with the one displayed in the online posting. Men tended to lie about their height and women about their weight. Both sexes posted pictures that were flattering compared to their lab pictures—although some people’s online pictures were much more flattering than others. From all the objective information he collected, Hancock calculated a deception index for each person.
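Hancock’s exact formula isn’t given here, but a deception index of this kind can be imagined as a simple sum of gaps between what a profile claimed and what the lab measured. The sketch below is a hypothetical version of such an index; the field names and scaling values are assumptions invented for the example, not figures from the study.

```python
# Hypothetical sketch of a profile "deception index": sum the gaps between
# claimed and measured values, scaling each field so that, say, a 5-pound
# weight fib counts about as much as a 1-inch height fib. The scales are
# invented for illustration, not taken from the actual study.
SCALES = {"height_in": 1.0, "weight_lb": 5.0, "age_yr": 1.0}

def deception_index(claimed, measured):
    return sum(abs(claimed[f] - measured[f]) / unit for f, unit in SCALES.items())

profile_claims = {"height_in": 72, "weight_lb": 165, "age_yr": 29}
lab_measures   = {"height_in": 70, "weight_lb": 180, "age_yr": 32}

print(deception_index(profile_claims, lab_measures))  # larger = more deceptive
```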
The heart of the research was to determine whether the word use in the online ads differed between the daters who were most deceptive and those who were most honest. Yes, there were differences, and they were comparable to those found in the other deception studies. Those who were most honest tended to say more, use bigger words and longer sentences, and use fewer emotion words (especially positive emotion words). The best general predictor of honesty was, not surprisingly, use of I-words.
Although the function words distinguished honest from deceptive online ads, there were also differences in content words. That is, people who were dishonest about their profiles tended to shift the focus of their self-description away from their sensitive topic. For example, women and men who lied about their weight were the least likely to mention anything about food, restaurants, or eating. Similarly, those whose pictures were the most deceptive tended to focus on topics of work and achievement in a way that built up their status and downplayed their physical appearance.
Do you really need a computer to assess how trustworthy a potential online match might be? Can’t we intuitively pick out the honest from the deceptive people by taking in all of the information that is available to us? Hancock’s research team paints a rather bleak picture. He solicited the help of about fifty students to rate each online profile for trustworthiness. The students’ estimates of honesty were no better than if you chose a trustworthy date by flipping a coin. One reason is that we tend to look at precisely the wrong language cues in guessing who is trustworthy. As raters of online ads, most of us assume that an upbeat, positive, simple, selfless, down-to-earth person is the most honest. And that, my friends, is why our “love detectors” are flawed instruments.
DECEIVING FOR WAR: THE LANGUAGE OF LEADERS PRIOR TO THE IRAQ WAR
Any student of history will undoubtedly look back on the relationship between the United States and Iraq after about 1950 and ask, “What were they thinking?” The “they” will refer to both countries. Perhaps the most puzzling turn of events in this relationship was the decision by the United States to invade Iraq in March 2003.
Immediately after the attacks on the World Trade Center and Pentagon on September 11, 2001, the administration of George W. Bush was convinced that Iraq might have played some role in the attacks. Without detailing the intricate history between the United States and Iraq, it should be noted that there was already bad blood between the two countries. Over the next year and a half, the Bush administration began raising more and more concerns about Iraq—including claims that it was harboring terrorists, building weapons of mass destruction, and planning attacks on the West. With the benefit of hindsight, we know that most of these concerns were unfounded. There were no terrorist training sites, no weapons of mass destruction, and no plans to attack anyone. Nevertheless, the escalating rhetoric about the danger of Iraq helped propel the invasion and occupation of the country in March 2003.
The social dynamics of a democratic government starting a war are tremendously complex. During periods of high anxiety, rumors spread quickly and people are vulnerable to distorted information. Rumors and speculation, with enough repeating, slowly transform themselves into firmly held beliefs. The distinctions between deception and self-deception can quickly erode.
The Center for Public Integrity (CPI), an independent watchdog agency, is a nonprofit and nonpartisan organization that supports investigative journalism on a wide range of issues (www.publicintegrity.org). In the months and years after the Iraq invasion, CPI began to comb through all of the public statements about Iraq made by key administration officials between 9/11 and the war. Indeed, anyone can access the hundreds of statements from speeches, press conferences, op-ed pieces, and television and radio interviews. For the researcher, the task is made even easier in that the portions of text that have been verified to be false are highlighted. It makes you think … A word count researcher could simply compare the words in the nonhighlighted sections with those in the highlighted sections.
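As a rough sketch of that comparison, imagine gathering the verified-true (nonhighlighted) passages and the verified-false (highlighted) passages into two piles of text and comparing the rate of any word category across the piles. The category list below is a made-up stand-in, not the actual dictionary used in this research, and the sample passages are placeholders.

```python
# Sketch of the comparison suggested above: compute the rate of a word
# category in the verified-true passages versus the verified-false ones.
import re

I_WORDS = {"i", "me", "my", "mine", "myself"}  # stand-in category

def category_rate(passages, category):
    words = [w for p in passages for w in re.findall(r"[a-z']+", p.lower())]
    return sum(w in category for w in words) / (len(words) or 1)

true_passages = ["I went back and looked at the record myself."]
false_passages = ["The program is active and the threat is growing."]

print("I-word rate, true statements: ", category_rate(true_passages, I_WORDS))
print("I-word rate, false statements:", category_rate(false_passages, I_WORDS))
```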
In fact, the very same Jeff Hancock who studied deception in online dating analyzed the CPI data bank. The Cornell group compiled 532 statements that contained at least one objectively false claim along with an equal number of true claims from the same sources. The statements, which were made between September 11, 2001, and September 11, 2003, were from eight senior Bush administration insiders: Bush himself, Vice President Dick Cheney, Secretary of State Colin Powell, Secretary of Defense Donald Rumsfeld, National Security Adviser Condoleezza Rice, Deputy Secretary of Defense Paul Wolfowitz, and White House Press Secretaries Ari Fleischer and Scott McClellan.
An example of information from the CPI database is an interview of Vice President Cheney on CNN’s Late Edition on March 24, 2002. The interviewer, Wolf Blitzer, asked Cheney if he supported the United Nations sending weapons inspectors into Iraq to try to find any evidence of weapons of mass destruction. Cheney responded (note that the highlighted sections were identified by CPI as deceptive; those not highlighted are presumed to be truthful):
What we said, Wolf, if you go back and look at the record is, the issue’s not inspectors. The issue is that he has chemical weapons and he’s used them. The issue is that he’s developing and has biological weapons. The issue is that he’s pursuing nuclear weapons. It’s the weapons of mass destruction and what he’s already done with them. There’s a devastating story in this week’s New Yorker magazine on his use of chemical weapons against the Kurds of northern Iraq back in 1988; may have hit as many as 200 separate towns and villages. Killed upwards of 100,000 people, according to the article, if it’s to be believed.