Now, a fresh breed of video and audio manipulation tools, facilitated by advances in artificial intelligence and computer graphics, allows for the creation of realistic-looking footage of public figures seeming to say anything and everything one chooses. Several research teams are working on capturing different visual and audio elements of human behavior, opening the door to contortion by devious pranksters.
For one, Face2Face — developed by researchers at Stanford University, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg — can manipulate video footage of public figures so another person can put words in their mouths in real time, capturing the manipulator's facial expressions as they talk into a webcam and then morphing those movements directly onto the face of the star of the original video. The research team demonstrated the technology's capabilities with videos of former US President George W. Bush and current US President Donald Trump.
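For a rough sense of the underlying idea, stripped of the dense 3D face modeling that makes Face2Face's results photorealistic, consider this toy Python sketch. It merely measures how a manipulator's facial landmarks deviate from a neutral pose and applies the same deviation to a target's landmarks; the mediapipe and OpenCV libraries and the file names are illustrative assumptions, not part of the actual system.

```python
# Illustrative sketch only: Face2Face fits a dense 3D face model; this toy
# version merely transfers 2D landmark offsets between faces.
import cv2
import numpy as np
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

def landmarks(image_bgr):
    """Return an (N, 2) array of face landmark pixel coordinates, or None."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    h, w = image_bgr.shape[:2]
    return np.array([(p.x * w, p.y * h)
                     for p in result.multi_face_landmarks[0].landmark])

# Hypothetical file names, for illustration only.
source_neutral = landmarks(cv2.imread("manipulator_neutral.jpg"))
source_current = landmarks(cv2.imread("manipulator_talking.jpg"))
target_neutral = landmarks(cv2.imread("politician_frame.jpg"))

# Core idea: capture how the manipulator's face deviates from rest, then
# apply the same deviation to the target's landmarks (a real system would
# first normalize for head pose and scale). Rendering the moved landmarks
# back into photoreal video is the genuinely hard part.
expression_delta = source_current - source_neutral
reenacted_target = target_neutral + expression_delta
```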
Similar applications have existed for a while now, but Face2Face's unique selling point is that it works in real time. Pair such footage with synthesized speech and the fictional statements in manipulated videos become far more convincing — a video depicting a politician giving a fabricated speech can be made to sound like the politician themselves. On the audio side, Canadian startup Lyrebird's efforts are especially impressive — its technology can turn written text into an effective audiobook "read" by the rich and famous.
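Lyrebird's own technology is proprietary, but the general recipe can be sketched with open-source tools. The snippet below uses the Coqui TTS library's zero-shot voice-cloning model as a stand-in for that kind of system; the model name and file paths are illustrative assumptions.

```python
# A minimal sketch of text-to-speech in a cloned voice, using the open-source
# Coqui TTS library (not Lyrebird's technology).
from TTS.api import TTS

# YourTTS supports zero-shot voice cloning from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

tts.tts_to_file(
    text="I never actually said any of this.",
    speaker_wav="reference_voice.wav",  # hypothetical sample of the target voice
    language="en",
    file_path="cloned_speech.wav",
)
```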
A University of Alabama at Birmingham (UAB) group has likewise been working on voice impersonation. With a mere three-to-five-minute voice sample, which can be culled live or from sources such as TV shows and YouTube videos, jokers can emulate that voice. The digital jesters can then talk into a microphone and have their speech converted live, so the words sound like they're being spoken by a particular individual — those so inclined could even phone someone and pretend to be that person.
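The live capture-and-convert loop the researchers describe can be caricatured in a few lines of Python. The sketch below records speech and applies a crude pitch shift rather than a learned model of the target speaker, so it illustrates the plumbing, not the impersonation quality; the sounddevice and librosa packages are assumptions.

```python
# A deliberately crude stand-in for a voice-conversion pipeline: capture live
# speech and shift its pitch. Real impersonation systems build a statistical
# model of the target voice from the 3-5 minute sample instead.
import librosa
import sounddevice as sd

SAMPLE_RATE = 16_000
SECONDS = 3

# Record a short utterance from the microphone.
recording = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
sd.wait()
speech = recording[:, 0]

# "Convert" the voice: shift pitch by four semitones. A genuine system would
# map spectral features onto those learned from the target speaker.
converted = librosa.effects.pitch_shift(speech, sr=SAMPLE_RATE, n_steps=4)

sd.play(converted, SAMPLE_RATE)
sd.wait()
```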
UAB research finds automated voice imitation can fool humans and machines http://t.co/2965mShOEx pic.twitter.com/zW0mv4xblI
— UAB Arts & Sciences (@UAB_CAS) September 28, 2015
While such capabilities could of course be used for harmless fun, and will no doubt be employed to create hilarious YouTube videos and the like, the technology also has a potentially sinister application — it will drag fake news, and fake news creators, into a much more sophisticated realm. Moreover, it will enable near-faultless impersonation on a grand scale — allowing people to pose as someone else for defamation or other nefarious ends.
As the timeless adage — seemingly rarely heeded — goes, one shouldn't believe everything one reads, and in years to come reflexive skepticism and doubt about everything one sees and hears may well become just as vital. Such research groups may have entirely benign intentions, but the results could be extremely dangerous.
These morphing technologies are in their infancy, which predictably means they're far from perfect — particularly on the video side. For the time being at least, facial expressions in manipulated videos can appear distorted or unnatural, and the synthesized voices somewhat robotic on occasion — still, UAB's cutting-edge voice tech has already been able to fool humans and even con voice-based security systems used by banks and smartphones.
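Why would a bank's system be fooled? Most voice-based security checks reduce to comparing speaker "embeddings", numerical fingerprints of a voice, and accepting the caller if the two are similar enough. Here is a minimal sketch of that decision, using the open-source resemblyzer package; the file names and the 0.75 threshold are illustrative assumptions.

```python
# A sketch of how a voice-based security check typically decides: embed two
# recordings and compare them.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

enrolled = encoder.embed_utterance(preprocess_wav("enrolled_customer.wav"))
claimed = encoder.embed_utterance(preprocess_wav("incoming_call.wav"))

# Cosine similarity between speaker embeddings; a synthetic voice that lands
# above the threshold is accepted as the genuine customer.
similarity = float(np.dot(enrolled, claimed))  # embeddings are L2-normalized
print("accept" if similarity > 0.75 else "reject")
```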
To see quite how convincing fakes can already be, one need only consider the University of Washington's Synthesizing Obama project, in which researchers took the audio from a single speech by the former President and used it to animate his face in an entirely different video with incredible accuracy.
Given time, it's almost certain these whiz kids, or others, will be able to perfectly recreate the sound and appearance of a person, to the point where humans find it incredibly difficult, if not impossible, to detect a fraud.
Even now, spotting a fake demands meticulous checking: establishing where a clip was allegedly filmed, examining lights and shadows for anomalies and inconsistencies, seeing whether other videos of the alleged event (if they exist) match what's on screen, and much more. It's already not uncommon for mainstream media organizations to slip up and mistake fake video clips for the real deal — and social media users are conned even more frequently, albeit generally in short-lived bursts.
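Some of that cross-checking can be automated, at least crudely. The sketch below compares a suspect clip against independently sourced footage of the same event using perceptual hashes of sampled frames; the libraries, file names and matching threshold are illustrative assumptions, and a failed match is a red flag rather than proof.

```python
# Cross-check a suspect clip against independent footage of the same event
# by comparing perceptual hashes of sampled frames.
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, every_n=30):
    """Perceptual hash of every n-th frame of a video."""
    hashes, cap, i = [], cv2.VideoCapture(path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

suspect = frame_hashes("viral_clip.mp4")
reference = frame_hashes("independent_footage.mp4")

# If no suspect frame resembles any reference frame, that's suspicious,
# though absence of a match is suggestive rather than conclusive.
matches = sum(1 for s in suspect
              if any(s - r <= 10 for r in reference))
print(f"{matches}/{len(suspect)} suspect frames match the reference footage")
```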
A well-doctored video of UK Prime Minister Theresa May declaring war on Argentina, or similar, could travel halfway around the world before its falsity was conclusively proven, provoking an economic, political, diplomatic or even military crisis in the process.