Facebook to Implement Software That Can Detect Fake Photos & Videos Via AI

Deepfakes are media files, including photos, videos and audio, that have been manipulated with artificial intelligence (AI) to appear highly realistic; they often cause confusion when spread across social networks.

Computer programmers working for Facebook revealed on Wednesday that they have developed software that is able not only to identify deepfakes, but also to figure out their origin, according to a post on Facebook’s blog.

Facebook AI researchers Xi Yin and Tal Hassner, along with a team of university computer science researchers, said that their software can “undo” deepfake files through reverse engineering, allowing it to detect AI-generated face images and determine how and where they were created.

"This work will give researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research," scientists said in a press release published by the Michigan State University.

They explained that the process of making a deepfake usually alters the digital "fingerprint" of the file, leaving behind subtle imperfections. The new software runs deepfakes through a neural network that searches for these marks.

"In digital photography, fingerprints are used to identify the digital camera used to produce an image," the scientists said. "Similar to device fingerprints, image fingerprints are unique patterns left on images […] that can equally be used to identify the generative model that the image came from."

A similar software program, Video Authenticator, was presented by Microsoft late last year amid the turbulent US presidential election. It could reveal deepfakes by analysing an image or video and detecting manipulation that may be invisible to the naked eye.
