AI ‘deepfake’ faces detected using astronomy methods
Analysing reflections of light in the eyes can help to determine an image’s authenticity
Researchers are turning to techniques from astronomy to help spot computer-generated ‘deepfake’ images — which can look identical to genuine photographs at first glance.
By analysing images of faces with methods normally used to survey distant galaxies, astronomers can measure how a person’s eyes reflect light, revealing telltale signs of image manipulation.
“It’s not a silver bullet, because we do have false positives and false negatives,” says Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull, UK, who presented the research at the Royal Astronomical Society’s National Astronomy Meeting on 15 July. “But this research provides a potential method, an important way forward, perhaps to add to the battery of tests that one can apply to try to figure out if an image is real or fake.”
Faked photos
Advances in artificial intelligence (AI) are making it increasingly difficult to tell the difference between genuine images, videos and audio and those that have been created by algorithms. Deepfakes substitute the features of one person or environment with another, and can make it seem as though individuals said or did things that they did not. Authorities warn that this technology can be weaponized and used to spread misinformation, for example during elections.
Authentic photographs should have “consistent physics”, Pimbblet explains, “so the reflections that you see in the left-hand eyeball should be very similar, although not necessarily identical, to the reflections seen in the right-hand eyeball”. The differences are subtle, so to detect them, the researchers turned to techniques designed to analyse light in astronomy images.
The work, which is not yet published, formed the basis of Adejumoke Owolabi’s master’s thesis. Owolabi, a data scientist at the University of Hull, UK, sourced real images from the Flickr-Faces-HQ Dataset and created fake faces using an image generator. Owolabi then analysed the reflections of light sources in the eyes in the images using two astronomical measurements: the CAS system and the Gini index. The CAS system quantifies the concentration, asymmetry and smoothness of an object’s light distribution. For decades, the technique has allowed astronomers, including Pimbblet, to characterize the light of extragalactic stars. The Gini index measures the inequality of light distribution in images of galaxies.
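The study itself is unpublished, but the Gini index borrowed by the researchers has a standard definition in galaxy-morphology work (for example, Lotz et al. 2004): pixel fluxes are sorted and their inequality scored between 0 (perfectly uniform light) and 1 (all light in one pixel). The Python sketch below shows how a comparison of the two eyes might look in principle; the eye crops, the threshold and the decision rule are illustrative assumptions, not the researchers’ actual pipeline.

```python
import numpy as np

def gini_index(pixels):
    """Gini index of a light distribution (standard galaxy-morphology
    definition): 0 for perfectly uniform flux, approaching 1 when the
    flux is concentrated in a few pixels."""
    x = np.sort(np.abs(np.asarray(pixels, dtype=float).ravel()))
    n = x.size
    mean = x.mean()
    if n < 2 or mean == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (mean * n * (n - 1))

# Hypothetical usage: compare the two eye regions of a face image.
# In practice `left_eye` and `right_eye` would be small greyscale
# crops centred on each eyeball; random arrays stand in for them here.
rng = np.random.default_rng(0)
left_eye = rng.random((32, 32))
right_eye = rng.random((32, 32))

difference = abs(gini_index(left_eye) - gini_index(right_eye))
THRESHOLD = 0.1  # illustrative only; a real cut-off would be tuned on data
print("possible fake" if difference > THRESHOLD else "consistent reflections")
```

The intuition matches Pimbblet’s “consistent physics” point: in a genuine photograph both eyeballs see the same light sources, so their reflection statistics should roughly agree, whereas a generator has no such constraint.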
By comparing the reflections in an individual’s eyeballs, Owolabi could correctly predict whether the image was a fake about 70% of the time. Ultimately, the researchers found that the Gini index was better than the CAS system at predicting whether an image had been manipulated.
Brant Robertson, an astrophysicist at the University of California, Santa Cruz, welcomes the research. “However, if you can calculate a metric that quantifies how realistic a deepfake image may appear, you can also train the AI model to produce even better deepfakes by optimizing that metric,” he warns.
Zhiwu Huang, an AI researcher at the University of Southampton, UK, says that his own research has not identified inconsistent light patterns in eyes in deepfake images. But “while the specific technique of using inconsistent reflections in eyeballs may not be broadly applicable, such techniques might be useful for analyzing subtle anomalies in lighting, shadows and reflections across different parts of an image”, he says. “Detecting inconsistencies in the physical properties of light could potentially complement existing methods and improve the overall accuracy of deepfake detection.”
doi: https://doi.org/10.1038/d41586-024-02364-y
This story originally appeared in Nature. Author: Sarah Wild