About 96 percent of so-called deepfakes circulating online, videos manipulated or fabricated using artificial intelligence software, are pornographic, Wired reports, citing research by Deeptrace, a cybersecurity company that specialises in detecting "AI-generated synthetic videos".
The company arrived at this conclusion by conducting a "kind of deepfake census during June and July", identifying about 15,000 videos openly touted as deepfakes, roughly twice as many as seven months earlier.
As the media outlet notes, these findings show that, rather than being employed to destabilise elections as was previously feared, deepfakes are "mostly being used very differently", sometimes as an instrument of harassment.
Henry Ajder, a Deeptrace researcher involved in the study, claimed that "there are deepfake forums where users discuss or request pornographic deepfakes of women they know, such as ex-girlfriends, wanting to see them edited into a pornographic clip."
And while Ajder noted that a deepfake video is unlikely to significantly affect the upcoming US presidential election, the Deeptrace report points to a growing awareness of deepfake technology that can "fuel political deception", as the media outlet puts it.