New deepfake technology turns a single photo and audio file into a sung video portrait
Another day, another deepfake: but this time they can sing.
New research from Imperial College London and Samsung’s AI Research Center in the UK shows how a single photo and audio file can be used to generate a singing or talking video portrait. Like the previous deepfake programs we’ve seen, researchers are using machine learning to generate their output. And while the fakes are far from 100% realistic, the results are astounding given how little data is needed.
By combining this real clip of Albert Einstein speaking, for example, with a photo of the famous physicist, you can quickly create a never-before-seen lecture:
Getting a little more wacky, why not have everyone’s favorite Mad Monk, Grigory Yefimovich Rasputin, singing Beyoncé’s classic “Halo”? What a karaoke night that would be.
Or how about a more practical example: generating a video that not only matches the input audio but is also edited to convey a specific emotion. Remember, all that was needed to create these clips was a single frame and an audio file. The algorithms did the rest.
As mentioned above, the output isn’t entirely realistic, but it’s the latest illustration of how quickly this technology is advancing. The techniques for generating deepfakes are getting easier by the day, and although this research is not commercially available, it didn’t take long for the original deepfakers to bundle their techniques into a single easy-to-use program. It will surely be the same with these new approaches.
Research like this naturally raises worries about its use for disinformation and propaganda – a question that currently preoccupies lawmakers in the United States. And while you can argue that such fears in the political realm are overblown, deepfakes have already caused real harm, especially to women, who have been targeted with humiliating non-consensual pornography.
Rasputin belting out Beyoncé is a bit of light relief at this point, but we don’t know how weird and terrible things could get in the future.