Researchers are using data science and an online crowdsourcing platform called FlimFlam to create a screening system that can more accurately detect deception based on facial and verbal cues.
They also hope to minimize instances of racial and ethnic profiling that TSA critics contend occur when passengers are pulled aside under the agency’s Screening of Passengers by Observation Techniques (SPOT) program.
“Basically, our system is like Skype on steroids,” says Tay Sen, a PhD student in the lab of Ehsan Hoque, an assistant professor of computer science at the University of Rochester.
Sen is lead author of two new papers accepted for major computing conferences hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM). The papers describe the framework the lab has used to create the largest video deception dataset so far—and why some smiles are more deceitful than others.
Games, lies, and videotape
Here’s how FlimFlam works: Two people sign up on Amazon Mechanical Turk, the crowdsourcing internet marketplace that matches people to tasks that computers are currently unable to do. A video assigns one person to be the witness, the other to be the interrogator.
The witness then sees an image and is instructed to memorize as many of the details as possible. The computer instructs the witness to either lie or tell the truth about what they’ve just seen. The interrogator, who has not been privy to the instructions to the witness, then asks the witness a set of questions. They include routine questions, such as, “What did you wear yesterday?” and “What is 14 times 4?”
“A lot of times people tend to look a certain way or show some kind of facial expression when they’re remembering things,” Sen says. “And when they are given a computational question, they have another kind of facial expression.”
These routine questions are ones the witness has no incentive to lie about, and they provide a baseline of that individual’s “normal” responses when answering honestly.
And, of course, there are questions about the image itself, to which the witness gives either a truthful or dishonest response.
A separate video records the entire exchange for later analysis.
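For a concrete picture of the protocol, here is a minimal sketch in Python. It is not the lab’s actual FlimFlam code, and the image questions are hypothetical examples; it simply shows how baseline questions could be paired with a randomized lie-or-truth instruction, as described above.

```python
# A minimal sketch of the game protocol described above -- not the lab's
# actual FlimFlam code. The image questions are hypothetical examples;
# the real question set is not listed in the article.
import random

BASELINE_QUESTIONS = [
    "What did you wear yesterday?",  # recall question
    "What is 14 times 4?",           # computation question
]

IMAGE_QUESTIONS = [
    "How many people were in the picture?",  # hypothetical
    "What color was the car?",               # hypothetical
]

def build_session(rng=random):
    """Assign the witness a lie/truth instruction and order the questions."""
    instruction = rng.choice(["lie", "tell the truth"])
    script = [(q, "answer honestly") for q in BASELINE_QUESTIONS]  # baseline
    script += [(q, instruction) for q in IMAGE_QUESTIONS]          # about the image
    return instruction, script

if __name__ == "__main__":
    instruction, script = build_session()
    print(f"Witness instruction for image questions: {instruction}")
    for question, mode in script:
        print(f"  {question}  [{mode}]")
```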
‘Duping delight’
An advantage of this crowdsourcing approach is that it allows researchers to tap into a far larger pool of research participants—and gather data far more quickly—than would be possible if participants had to be brought into a lab, Hoque says. So far, the researchers have gathered 1.3 million frames of facial expressions from 151 pairs of individuals playing the game.
Data science is enabling the researchers to quickly analyze all that data in novel ways. For example, they used facial feature analysis software to identify which of 43 facial muscles participants used in a given frame, and to assign a numerical weight to each.
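The article does not name the analysis software, but tools such as OpenFace export per-frame facial action-unit intensities; under that assumption, a short sketch shows how the numerical weights become a frames-by-muscles matrix.

```python
# A sketch of the per-frame feature step, assuming OpenFace-style output
# in which each video frame carries facial action-unit intensity columns
# (e.g. "AU06_r", "AU12_r"). The article does not name the software the
# lab used, so treat the column naming here as an assumption.
import pandas as pd

def load_au_features(csv_path):
    """Return a frames x action-units intensity matrix and the unit names."""
    df = pd.read_csv(csv_path)
    au_cols = [c for c in df.columns
               if c.strip().startswith("AU") and c.strip().endswith("_r")]
    return df[au_cols].to_numpy(), [c.strip() for c in au_cols]
```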
The researchers then fed the results into a supercomputer and applied an unsupervised machine learning technique called clustering, which finds patterns in the data without humans first assigning any labels or categories.
“It told us there were basically five kinds of smile-related ‘faces’ that people made when responding to questions,” Sen says. The one most frequently associated with lying was a high-intensity version of the so-called Duchenne smile, involving both cheek/eye and mouth muscles. This is consistent with the “Duping Delight” theory that “when you’re fooling someone, you tend to take delight in it,” Sen explains.
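As an illustration only—the paper’s exact algorithm and preprocessing are not described here—a standard k-means clustering of those per-frame vectors, with k set to 5, mirrors the five smile-related groupings Sen describes.

```python
# A generic stand-in for the unsupervised step: k-means clustering of the
# per-frame action-unit vectors, with k=5 to mirror the five smile-related
# "faces" reported above. The lab's exact method may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_frames(features, k=5, seed=0):
    """features: (n_frames, n_action_units) intensity matrix."""
    scaled = StandardScaler().fit_transform(features)
    model = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(scaled)
    return model.labels_, model.cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = rng.random((1000, 17))        # stand-in for real action-unit data
    labels, centers = cluster_frames(fake)
    print(np.bincount(labels))           # how many frames fall in each cluster
```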
More puzzling was the discovery that honest witnesses would often contract their eyes, but not smile at all with their mouths.
“When we went back and replayed the videos, we found that this often happened when people were trying to remember what was in an image,” Sen says. “This showed they were concentrating and trying to recall honestly.”
Behind the smile
So will these findings tip off liars to simply change their facial expressions?
Not likely. The tell-tale strong Duchenne smile associated with lying involves “a cheek muscle you cannot control,” Hoque says. “It is involuntary.”
The researchers say they’ve only scratched the surface of potential findings from the data they’ve collected.
Hoque, for example, is intrigued by the “face” often worn not by the witnesses but by the interrogators who successfully guessed that a witness was lying. It is a “polite” smile involving the mouth only, the so-called “Pan Am” smile that the airline’s flight attendants were instructed to wear at all times, even when they were frustrated.
“We argue that it is not only the person who is lying that we need to learn from, but the people they are lying to. The interrogator can also unwittingly reveal a lot of information,” Hoque says. And that could have implications for training TSA officers.
“In the end, we still want humans to make the final decision,” Hoque says. “But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions.”
Source: University of Rochester