Thanks to a new AI tool created by computer scientists at the University at Buffalo, we can now spot portrait-style deepfakes with 94% accuracy. How does the tool do this? By analyzing the patterns of light reflected on each of the photographed person's corneas, which in a genuine photo should match rather than differ.
Corneas have a mirror-like surface, so the lighting of the surrounding room or area produces a similar reflection shape on both of them. In real photos, the two eyes will show near-identical reflection patterns. Deepfake images, however (which are created by generative adversarial networks, or GANs), usually fail to synthesize this consistency, instead producing a distinct reflection on each cornea, sometimes even in mismatched locations.
The AI tool, then, maps out the face, locates the eyes, and analyzes the reflection in each one. It generates a score measuring how similar the two reflection patterns are: the lower the score, the more likely the image is a deepfake. The tool proved effective when scanning deepfakes from This Person Does Not Exist, a website filled with images of fake people generated by the StyleGAN2 architecture.
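The article doesn't spell out how the similarity score is computed, but one simple approach in the same spirit is to threshold each eye crop for bright "specular highlight" pixels and compare the two resulting regions with intersection-over-union. This is a minimal illustrative sketch, not the researchers' actual implementation; the threshold value and the toy eye crops are assumptions:

```python
import numpy as np

def highlight_mask(eye, threshold=0.8):
    """Binary mask of bright (specular-highlight) pixels in a
    grayscale eye crop with values in [0, 1]."""
    return eye >= threshold

def reflection_similarity(left_eye, right_eye, threshold=0.8):
    """Intersection-over-union of the two eyes' highlight regions.
    Close to 1.0 when the reflections match, close to 0.0 when they
    sit in different places (a deepfake tell)."""
    a = highlight_mask(left_eye, threshold)
    b = highlight_mask(right_eye, threshold)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # no highlight in either eye: nothing inconsistent
    return np.logical_and(a, b).sum() / union

# Toy example: matching highlights vs. a highlight shifted
# to a different corner, as a GAN might produce.
real_left = np.zeros((8, 8)); real_left[2:4, 2:4] = 1.0
real_right = real_left.copy()
fake_right = np.zeros((8, 8)); fake_right[5:7, 5:7] = 1.0

print(reflection_similarity(real_left, real_right))  # 1.0
print(reflection_similarity(real_left, fake_right))  # 0.0
```

In practice the eye crops would come from a face-landmark detector, and the score would be thresholded to flag suspect images; the sketch only shows why a low similarity score points toward a fake.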
However, the scientists who created the tool did note some limitations, chief among them that it relies on a reflected light source being visible in both eyes. If someone is winking or blinking, it likely won't work; nor will it if the subject is partially turned away from the camera, as the tool has only proved successful on portrait images. Additionally, anyone sufficiently proficient in Photoshop could edit out these inconsistencies, which would likely render the tool useless.
Despite these limitations, the tool still marks a big step forward for this type of technology. It won’t bust sophisticated deepfakes any time soon, but it can spot simpler ones and lay the foundation for more powerful detection technology in the future to go alongside our current capabilities to detect audio and video deepfakes.
via The Next Web