Computer scientists at the University of Rochester in New York released a statement Tuesday describing how researchers used data science and an online crowdsourcing framework known as the Automated Dyadic Data Recorder (ADDR) to build a program that zeroes in on the facial and verbal cues people exhibit when they lie.
Participants accessed the ADDR program by logging onto Amazon Mechanical Turk, an internet crowdsourcing marketplace that matches workers to tasks computers are currently unable to complete. Once study participants joined the marketplace, each was assigned to be either a "describer" or an "interrogator."
Describers were shown an image and told to memorize as many details as possible, then instructed to either lie or tell the truth about what they had seen when questioned by the interrogator. While the interrogator asked questions about the picture, the software captured and analyzed video of the describer's facial reactions.
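The article doesn't show how ADDR actually pairs participants, but the workflow it describes (two workers matched, one randomly made a describer who is told to lie or tell the truth about an assigned image, the other made the interrogator) could be sketched roughly as follows. Every name and field here is illustrative, not taken from the real framework:

```python
# Illustrative sketch only: the real ADDR framework is a web application
# running on Amazon Mechanical Turk, and none of these names come from it.
import random
from dataclasses import dataclass

@dataclass
class Session:
    describer: str     # worker shown the image and told to lie or be truthful
    interrogator: str  # worker who asks questions about the image
    image_id: str      # image the describer must memorize
    must_lie: bool     # instruction given to the describer

def pair_workers(worker_ids: list[str], image_ids: list[str]) -> list[Session]:
    """Randomly pair workers, assign roles, and flip a coin for lie/truth."""
    random.shuffle(worker_ids)
    return [
        Session(
            describer=worker_ids[i],
            interrogator=worker_ids[i + 1],
            image_id=random.choice(image_ids),
            must_lie=random.random() < 0.5,
        )
        for i in range(0, len(worker_ids) - 1, 2)
    ]

for s in pair_workers(["w1", "w2", "w3", "w4"], ["img_001", "img_002"]):
    print(s)
```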
"A lot of times people tend to look a certain way or show some kind of facial expression when they're remembering things," Tay Sen, a PhD student involved in the study, said in the press release. "And when they are given a computational question, they have another kind of facial expression."
After a few weeks of having participants answer a slew of questions both truthfully and deceptively, the researchers had gathered some 1.3 million frames of facial expressions from 151 pairs of individuals, creating what the team describes as the largest existing database for the analysis of facial expressions. Sen calls the system "like Skype on steroids."
Later, using a machine learning program that can detect patterns in video, the researchers determined that the Duchenne smile was the facial expression most commonly linked to lying. The Duchenne smile, sometimes informally called "smizing," is a smile that extends to the muscles around the eyes, creating crow's-feet wrinkles at their corners.
Honest describers, by contrast, tended not to smile; instead they slightly contracted their eyes.
"When we went back and replayed the videos, we found that this often happened when people were trying to remember what was in an image," Sen said. "This showed they were concentrating and trying to recall honestly."
Researchers hope the dataset can help catch liars trying to slip through airport security checkpoints and reduce racial and ethnic profiling during screenings, since officials could rely on software to actually detect deception rather than falling prey to their own biases. They ultimately noted, however, that more work is needed before true lie-detecting software can be deployed.
"In the end, we still want humans to make the final decision," Ehsan Hoque, an assistant professor taking part in the study, said. "But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions."