posted on 2025-11-18, 11:20, authored by Katie L.H. Gray, Josh P. Davis, Carl Bunce, Eilidh Noyes, Kay Ritchie
<p dir="ltr">Generative adversarial networks (GANs) can create realistic synthetic faces, which have the potential to be used for nefarious purposes. The synthetic faces produced by GANs are difficult to detect and are often judged to be more realistic than real faces. Training programmes have been developed to improve human synthetic face detection accuracy, with mixed results. Here, we investigate synthetic face detection and discrimination in super-recognizers (SRs; who have exceptional face recognition skills), and typical-ability control participants. We also devised a training procedure which sought to highlight rendering artefacts. In two different experimental designs, we found that SRs (total N = 283) were better at detecting and discriminating synthetic faces than controls (total N = 381), where control participants were below chance without training. Trained SRs and controls had significantly better performance than those without training, and the magnitude of the training effect was similar in both groups. Our results suggest that SRs are using cues unrelated to rendering artefacts to detect and discriminate synthetic faces, and that an easily implementable training procedure increases their performance to above chance levels. These results have implications for real-world scenarios, where trained SRs' performance could be harnessed for synthetic face detection.</p>
School affiliated with: School of Psychology, Sport Science and Wellbeing (Research Outputs)