Canadian clinical psychologist, best-selling author, and culture warrior Jordan Peterson has tweeted a YouTube video warning of the danger deepfakes pose to current and future generations, just days after an audio-spoofing website replicating his voice was discovered online.
The video, posted on the YouTube channel The Thinkery, addressed the "NotJordanPeterson" neural network, which can be made to say anything one wants in the professor's voice, and also examined some of the potentially disturbing consequences of the technology's further development.
The vlogger behind the clip, titled "Deepfakes Will Destroy Our Information Ecosphere", cited an article about "the world's top deepfake artist", who was working to solve a problem he had created in the first place, and suggested that such AI poses a genuine threat to Western democracies.
"The way our democracies function relies on the fact that we can be sure that a piece of media is reliable, as in this is a video recording or an audio recording, and that this is why it's so nefarious when media outlets clip a piece of audio or slice it together, or like change camera and cut out a certain segment of what is being said. But what happens when they don't need to do that and they can just literally f*cking make up your statements for you?" the content creator said.
While his point of view found much support on Twitter, many of Peterson's followers believe the technology could bring positive change, as people would start questioning everything they see or hear, thereby developing critical thinking skills.
Others couldn't resist getting the most out of the neural network and made their own deepfakes, including one featuring a "never-before-heard" 2Pac-Peterson collab.
The US government has already expressed concern that deepfakes could be used to spread convincing fake news ahead of the 2020 presidential election: earlier this month, the House Intelligence Committee asked big tech companies, including Facebook, Twitter, and Google, how they planned to tackle the threat of digital trickery. The companies said they were working on the problem, but did not go into detail.