So if they open up their codebase, can we expect the deepfakes community to use this system to refine their own tools and make deepfakes harder to detect?
(They're called Generative Adversarial Networks - look it up)
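To illustrate the worry: once a detector is published, a forger can treat its score as feedback and nudge fakes until the detector stops complaining. Here's a toy sketch of that feedback loop (everything here — the `detector_score` function, the hill-climbing "forger" — is hypothetical and purely illustrative, not anything from the paper):

```python
import random

def detector_score(x):
    """Toy stand-in for an open-sourced detector: higher score = 'looks real'.
    Hypothetical: scores a single forgery statistic x, peaking at the value
    (1.0) that real footage happens to have."""
    return -(x - 1.0) ** 2

def refine_forger(steps=500, lr=0.2, seed=0):
    """Adversarial refinement: the forger repeatedly queries the published
    detector and keeps any tweak the detector scores higher."""
    rng = random.Random(seed)
    x = 5.0  # initial crude fake, far from the 'real' statistic
    for _ in range(steps):
        candidate = x + rng.uniform(-lr, lr)
        if detector_score(candidate) > detector_score(x):
            x = candidate  # keep the change the detector likes better
    return x

fooled = refine_forger()
print(detector_score(fooled) > detector_score(5.0))  # refinement improved the fake
```

This is exactly the GAN dynamic in miniature: the "generator" only needs query access to the "discriminator" to drift toward its blind spots, which is why open-sourcing a detector cuts both ways.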
The rise of AI systems that can generate fake images and videos has spurred researchers in the US to develop a technique to sniff out these cyber-shams, also known as deepfakes. Generative Adversarial Networks (GANs) are commonly used for creative purposes. These neural networks have helped researchers create made-up data to …
If one of the things they are looking for is the lower resolution of the fakes, then defeating the detector might be as simple as feeding ever-higher-resolution original images into the faking software. Something I learned in my video-editing days: if you have to use source material that has been through lossy compression, start with the highest resolution you can get. Better still, avoid source material compressed that way at all.
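A crude version of that resolution cue can be sketched with nothing but NumPy: blurring (or upscaling a lossy source) strips high-frequency detail, which a simple Laplacian-energy measure picks up. This is a hypothetical heuristic for illustration, not the method from the paper:

```python
import numpy as np

def laplacian_energy(img):
    """Mean squared response of a 4-neighbour discrete Laplacian.
    Low-detail (blurred/upscaled) frames score lower, so this acts as a
    crude proxy for the 'lower resolution' tell. Illustrative only."""
    img = img.astype(float)
    lap = (img[1:-1, :-2] + img[1:-1, 2:] +
           img[:-2, 1:-1] + img[2:, 1:-1] -
           4.0 * img[1:-1, 1:-1])
    return float(np.mean(lap ** 2))

def box_blur(img, k=5):
    """Simple box blur to simulate a lossy, low-resolution source."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(float)  # noisy "high-detail" frame
blurred = box_blur(sharp)  # simulated lossy version of the same frame
print(laplacian_energy(sharp) > laplacian_energy(blurred))  # True: blur removes detail
```

Which is exactly why starting from the sharpest possible source material would erode any detector that leans on this cue.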
"Did they test the final product with real deepfakes?"
Yes, see the paper. They tested it against DeepFake-generated videos, including a fake of Nic Cage as Harrison Ford taken from YouTube (fig 6). It correctly flagged the Nic Cage one as fake.
As the article points out, it's not perfect as it's built from their carefully curated dataset, and needs to be tested against a much wider set of forged videos.
I would be more interested in what it says about the Moff Tarkin sequences and the Leia closing sequence from Rogue One. These were produced using an identical method, but with proper high-res source images.
If it can detect Moff Tarkin's appearances as fake, we have a winner.
Biting the hand that feeds IT © 1998–2019