A recently released deepfake celebrity roundtable video has left many netizens in awe.
The 15-minute video, published on the entertainment film website Collider’s YouTube channel last week, features deepfakes of world-famous actors Tom Cruise, Robert Downey Jr., Jeff Goldblum and Ewan McGregor, as well as Star Wars filmmaker George Lucas.
They jokingly discuss a wide array of issues, from film streaming right down to unreturned DVDs and body odour.
Collider said it “used deepfake technology to bring together five living legends to discuss the streaming wars and the future of cinema”.
Many netizens described the deepfake roundtable as “hilarious” and “amazing”, but some begged to differ.
This is the moment when the creator needs to look at his invention very carefully. And then destroy it.
— BLEACHED DANIELLE (@DanRemnant626) November 15, 2019
Seriously, nothing good can come out of this.
Please stop doing this and encouraging people to use this technology.
— Joshua Nathan Strong (@Boopa1219) November 14, 2019
These deepfakes collider puts out are amazing.
— ENTERinSILENCE (@ENTERinSILENCE) November 14, 2019
This was HILARIOUS.
— Ben (@SirBenKenobi) November 16, 2019
Looking back at the way they sold The Force Awakens is really weird. As if general audiences/typical Star Wars fans were film snobs and cared about the movie being shot on Digital or 35mm with real puppets.
This might be the weirdest "celebrity deepfake roundtable with Tom Cruise and Jeff Goldblum" that you will see today: https://t.co/R1o7Bxha1T
— PingPipe Internet (@PingPipe) November 18, 2019
Danger of Deepfakes
The deepfake roundtable comes after a study by the cybersecurity company Deeptrace revealed in October that about 96 percent of deepfakes circulating online are pornographic.
US lawmakers have already moved to rein in the technology: Senator Ben Sasse, for example, called for rules that would make it illegal to “maliciously” produce and distribute deepfakes, while Representative Yvette Clarke introduced a bill that would oblige the creators of AI-generated fakes to disclose their fabricated nature by including some kind of watermark.
Eline Chivot, an EU tech policy analyst at the Centre for Data Innovation, for her part warned that human review is not a sufficient solution to stop the spread of deepfakes: “Debunking disinformation is increasingly difficult, and deepfakes cannot be detected by other algorithms easily yet; as they get better, it becomes harder to tell if something is real or not.”
Fong Choong Fook, CEO of the cybersecurity firm LGMS, went even further, warning that deepfakes of politicians could have far graver consequences.
“Imagine there is a fake video widely spread over the Internet, where a defence minister is declaring war on another country. This could lead to international chaos,” he said, adding that another possible impact would be “the compromise of non-repudiation”.
He also warned that, for now, people have to rely on their own eyes to detect deepfake videos, a natural tool that Fong said is unable to spot any noticeable flaws in well-made deepfake footage.