People Can't Distinguish Deepfake From Real Videos Even If Warned in Advance, Study Says
11:42 GMT 15.01.2022 (Updated: 12:36 GMT 15.01.2022)
From Barack Obama insulting Donald Trump and Vladimir Putin speaking about division in the United States to Leonardo DiCaprio advertising an energy drink in the centre of Moscow – deepfakes have become so good over the years that one wonders what would happen should someone decide to use this technology for malicious purposes.
People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University.
One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real.
The deepfake used in the study was developed by VFX artist Chris Ume, who made headlines with his Tom Cruise deepfakes, showing the Hollywood actor doing magic tricks, playing the guitar, and joking about former Soviet leader Mikhail Gorbachev.
Of those who had been told in advance that one of the videos was a deepfake, 20 percent identified it correctly, compared with just 10 percent of those who hadn't been forewarned. But even when told, 70 percent of participants couldn't distinguish a real video from a fake one.
"Individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content", the researchers wrote.
Participants made the same errors even when some of them were familiar with Tom Cruise's age and his use of social media. The researchers say the only characteristic that correlated with the ability to spot a deepfake was age: older participants did better at recognising the false video.
The study's findings show that deepfakes pose a danger to the value of video media. People will start losing trust in online videos, including authentic content, researchers write. Experts have long warned that deepfakes, created with the help of artificial intelligence and machine learning methods, may become an extremely powerful tool for individuals who want to spread misinformation.
This is of particular concern in light of a recent study conducted by New York University and France's Universite Grenoble Alpes, which found that fake news attracted more notice on Facebook than true stories.
Another study showed that 15 percent of participants believed that a deepfake showing former US President Barack Obama calling his successor Donald Trump a "dips**t" was real, despite the fact that the content of the video was deemed "highly improbable".
Reports say deepfakes have already been used to extort money, with criminals creating videos of individuals engaged in sexual acts and then threatening to post them online if they are not paid.