Imagine a Fake Video of a Defence Minister Declaring War! Analysts Warn Deepfakes Could Create Chaos

While politicians, who alongside celebrities are most affected by the deepfake threat, are trying to come up with legal means to stop wrongdoers from abusing the technology, experts warn that the battle against such material will not be easy. Meanwhile, the technologies for detecting fakes appear to be lagging behind those that create them.
Sputnik

Investment and research are directed more at developing deepfake-generating tools than at detecting them, warns Eline Chivot, an EU tech policy analyst at the Centre for Data Innovation. The result, she says, is a mismatch and a shortage of the tools needed to tackle the problem efficiently, as the technology advances rapidly and becomes available to an ever broader range of actors.

She points out that human review is not a sufficient solution to stop deepfakes from spreading.

“Debunking disinformation is increasingly difficult, and deepfakes cannot be detected by other algorithms easily yet. As they get better, it becomes harder to tell if something is real or not. You can do some statistical analysis, but it takes time, for instance, it may take 24 hours before one realises that video was a fake one, and in the meantime, the video could have gone viral”, she says.
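For illustration only, the sketch below shows one crude form such statistical analysis can take: measuring how much high-frequency energy each frame's spectrum contains, since synthetically generated frames sometimes exhibit unusual spectra. The file name, the threshold and the test itself are assumptions made for the sake of the example, not a description of any particular detection product, and real detectors are far more elaborate.

```python
# Toy example of frame-level statistical analysis of a video.
# Assumption: unusually high spectral energy outside the low-frequency band
# is treated as "suspicious"; the 0.35 threshold is arbitrary.
import cv2
import numpy as np


def high_freq_ratio(gray_frame: np.ndarray) -> float:
    """Share of spectral energy outside a central low-frequency disc."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (min(h, w) // 8) ** 2
    total = spectrum.sum()
    return float(spectrum[~low_mask].sum() / total) if total else 0.0


def scan_video(path: str, threshold: float = 0.35) -> float:
    """Return the fraction of frames whose spectrum looks suspicious."""
    cap = cv2.VideoCapture(path)
    flagged = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        flagged += high_freq_ratio(gray) > threshold
        total += 1
    cap.release()
    return flagged / total if total else 0.0


if __name__ == "__main__":
    # 'clip.mp4' is a placeholder path for the video under review.
    print(f"Suspicious frames: {scan_video('clip.mp4'):.1%}")
```

Even a simple per-frame check like this has to decode and analyse every frame, which helps explain why, as Chivot notes, verdicts can take hours while a clip is already going viral.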

She notes that blanket legislation regulating the use of deepfakes could be misguided, since it would require a greater understanding of the technology among policymakers. Instead, she argues, priority should be given to policies that attract investment, which could help develop new detection technologies.

“Partnerships should be developed with industry including social media companies, e.g., with university researchers, innovators, scientists, startups, etc. to build better manipulation detection and ensure these systems are integrated into online platforms”, she says.

Tech Platforms’ Assistance Needed

Her stance is echoed by tech entrepreneur Fernando Bruccoleri, who argues that even though people will ultimately be responsible for determining what is real and what is not, tech platforms should make that easier for them. Deepfakes, he says, will create a problem of discerning the truth, as neither genuine nor fake videos could be trusted to serve as evidence. He also agrees that the legal changes needed to respond to the deepfake challenge will take time to be accepted.

“I think it will not be as simple as it seems to be able to pass and legislate in the short term. Surely any platform will design tools to detect if a video is fake or not, as a counterpart”, he says.

No Legal Tool Against Creating Deepfakes

At the same time, Shamir Allibhai, CEO of the video verification company Amber, which specialises in detecting fakes, insists that regulating the creation of deepfakes is an impossible task. It would be easier, he says, to tackle the distribution of such material, in the same way that laws against so-called revenge porn do.

“I think that if you wanted to, you can tackle it where you could legislatively say the social media networks should not allow deepfakes on their platforms. Potentially, I think there’s a number of statutes that already talk about content and social networks’ ability to have editorial oversight over them, but that might be one way to do this”, he says.

Although he points out that deepfakes could serve good purposes, such as recreating deceased film stars in movies, he admits that the technology can be exploited to foment political turmoil. Fake videos, like fake news, can be used to pull society “further and further apart”, he notes, adding that more content of this kind is coming.

“I think that’s real success of this fake content, and I think we are going to see significantly more of it in the run-up to the US presidential elections in 2020. I mean, the challenge is where free speech ends and where regulating this content begins; I think it’s a very sensitive and difficult line”, he warns.

International Chaos Possible

Fong Choong Fook, CEO of the cyber-security firm LGMS, goes further, warning that deepfakes of politicians, whose images are easy to fake because so much footage of them is available on the Internet, could lead to international chaos.

“Imagine there is a fake video widely spread over the Internet, where a defence minister is declaring war with another country. This could lead to international chaos”, he says, noting that another impact would be “the compromise of non-repudiation”.

At the same time, he warns that while we still have to rely on our eyes to detect deepfake videos, the human eye is unable to spot any noticeable flaws in a well-trained deepfake.

“Therefore, it is very difficult for a human fighting a machine in this situation. What if only a machine could defeat a machine?” the tech entrepreneur concludes.

He predicts that programming a machine to detect such videos will be challenging, as deepfakes rely on deep learning, a more sophisticated and data-hungry branch of machine learning.

“In deep learning, the user just needs to provide input data and does not need to provide guides to the machine. The machine will have the ability to learn, predict and assess the accuracy of the output. Therefore, the amount of input data required in deep learning could be ten or even a hundred times larger than machine learning”, he explains.
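A minimal sketch of the “machine versus machine” idea Fong describes might look like the code below: a small convolutional network that learns to label face crops as real or fake purely from example data, with no hand-written rules. The model size, input shape and the training data here are assumptions for illustration; production detectors are trained on vastly larger labelled datasets, which is exactly the data appetite he points to.

```python
# Illustrative deep-learning deepfake classifier (PyTorch).
# Everything below is a toy: random stand-in data, arbitrary architecture.
import torch
from torch import nn


class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: > 0 means "fake"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def train_step(model, frames, labels, optimiser):
    """One gradient step on a batch of (N, 3, 128, 128) face crops."""
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(frames).squeeze(1), labels.float()
    )
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    model = DeepfakeClassifier()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random stand-in batch; a real detector needs huge labelled corpora.
    frames = torch.rand(8, 3, 128, 128)
    labels = torch.randint(0, 2, (8,))
    print("loss:", train_step(model, frames, labels, optimiser))
```

The model is only given inputs and labels and learns its own features, which is the point Fong makes: no explicit guidance is provided, but the approach only works when fed far more data than traditional, hand-engineered machine learning methods require.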