🤖 AI Summary
This study presents the first systematic investigation of how social-media beauty filters interfere with deepfake and morphing attack detection. Focusing on widely deployed smoothing-based beauty filters under realistic conditions, we run controlled experiments with state-of-the-art detectors on mainstream benchmark datasets, comparing detection accuracy before and after filter application. Results show that beautification significantly degrades detection performance, with average accuracy dropping by 12.7%, exposing a critical robustness gap in current methods under cosmetic manipulation and revealing a previously overlooked “beautification vulnerability” in biometric security. To address it, we motivate the development of beauty-filter-resilient detection models. Our findings provide both empirical evidence and a new perspective for securing the real-world deployment of trustworthy facial recognition systems.
📝 Abstract
Digital beautification through social media filters has become increasingly popular, raising concerns about the reliability of facial images and videos and the effectiveness of automated face analysis. This issue is particularly critical for digital manipulation detectors, systems that aim to distinguish genuine from manipulated data, especially deepfakes and morphing attacks designed to deceive both humans and automated facial recognition. This study examines whether beauty filters impact the performance of deepfake and morphing attack detectors. We perform a comprehensive analysis, evaluating multiple state-of-the-art detectors on benchmark datasets before and after applying various smoothing filters. Our findings reveal consistent performance degradation, highlighting vulnerabilities introduced by facial enhancements and underscoring the need for detection models resilient to such alterations.
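The evaluation protocol described above (apply a smoothing filter, then re-measure detector accuracy) can be sketched in a few lines. This is a purely illustrative toy, not the paper's pipeline: a box blur stands in for a commercial smoothing beauty filter, and a high-frequency-energy thresholder stands in for a real deepfake detector; all function names and parameters here are hypothetical.

```python
# Illustrative sketch only: a box blur approximates a smoothing beauty
# filter, and a toy "detector" scores high-frequency energy, the kind of
# artifact signal that smoothing tends to erase.

def box_blur(img, k=1):
    """Smooth a 2D grayscale image (list of lists) with a (2k+1)^2 box filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - k), min(h, y + k + 1))
                    for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def hf_energy(img):
    """Mean absolute horizontal gradient, a crude high-frequency measure."""
    h, w = len(img), len(img[0])
    return sum(abs(img[y][x + 1] - img[y][x])
               for y in range(h) for x in range(w - 1)) / (h * (w - 1))

def accuracy(detector, samples, labels):
    """Fraction of samples a binary detector labels correctly."""
    preds = [detector(s) for s in samples]
    return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
```

Running a detector through `accuracy` twice, once on original samples and once on their blurred versions, reproduces the before/after comparison in miniature: smoothing suppresses the high-frequency cues the toy detector relies on, which is the same failure mode the study reports for real detectors under beauty filters.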