🤖 AI Summary
This work addresses the privacy risks associated with transmitting or storing facial images in client-server architectures, where existing privacy-preserving methods often introduce significant semantic distortions that impair the ability of deepfake detection models to discern subtle forensic traces. To overcome this limitation, the paper presents a novel framework that integrates high-fidelity image steganography directly into the detection pipeline, enabling forgery analysis within the steganographic domain. The proposed approach leverages Low-Frequency-Aware Decomposition (LFAD), Spatial-Frequency Differential Attention (SFDA), and Steganographic Domain Alignment (SDA) mechanisms to suppress carrier-induced interference while enhancing sensitivity to forgery artifacts. This design achieves high visual imperceptibility without compromising detection fidelity. Extensive experiments across seven benchmark datasets demonstrate that the method significantly outperforms existing privacy-preserving techniques in maintaining deepfake detection accuracy.
📝 Abstract
Most existing Face Forgery Detection (FFD) models assume access to raw face images. In practice, under a client-server framework, private facial data may be intercepted during transmission or leaked by untrusted servers. Previous privacy protection approaches, such as anonymization, encryption, or distortion, partly mitigate leakage but often introduce severe semantic distortion, making images appear obviously protected. This alerts attackers, provoking more aggressive strategies and turning the process into a cat-and-mouse game. Moreover, these methods heavily manipulate image contents, introducing degradation or artifacts that may confuse FFD models, which rely on extremely subtle forgery traces. Inspired by advances in image steganography, which enable high-fidelity hiding and recovery, we propose a Steganography-based Face Forgery Detection framework (StegaFFD) to protect privacy without raising suspicion. StegaFFD hides facial images within natural cover images and directly conducts forgery detection in the steganographic domain. However, the hidden forgery-specific features are extremely subtle and are interfered with by cover semantics, posing significant challenges. To address this, we propose Low-Frequency-Aware Decomposition (LFAD) and Spatial-Frequency Differential Attention (SFDA), which suppress interference from low-frequency cover semantics and enhance the perception of hidden facial features. Furthermore, we introduce Steganographic Domain Alignment (SDA) to align the representations of hidden faces with those of their raw counterparts, enhancing the model's ability to perceive subtle facial cues in the steganographic domain. Extensive experiments on seven FFD datasets demonstrate that StegaFFD achieves strong imperceptibility, avoids raising attackers' suspicion, and better preserves FFD accuracy compared to existing facial privacy protection methods.
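The abstract does not specify how LFAD separates low-frequency cover semantics from high-frequency residues, so the following is only a rough illustration of the general idea, not the paper's actual module: a standard FFT low-pass split that partitions an image into a smooth low-frequency component (where cover semantics concentrate) and a high-frequency residual (where subtle hidden-payload and forgery cues tend to live). The function name and the cutoff parameter are hypothetical choices for this sketch.

```python
import numpy as np

def low_frequency_split(img, cutoff=0.1):
    """Illustrative only: split a 2-D image into low- and high-frequency
    parts with a circular low-pass mask in the FFT domain. This is a
    generic frequency decomposition, not the paper's LFAD module."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    radius = cutoff * min(h, w)
    # Keep only frequencies within `radius` of the spectrum center.
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    high = img - low  # residual carries the high-frequency content
    return low, high

rng = np.random.default_rng(0)
img = rng.random((64, 64))
low, high = low_frequency_split(img)
```

Because the split is defined as `high = img - low`, the two components reconstruct the input exactly; a detector operating in the steganographic domain would then attend to the `high` branch while down-weighting the `low` branch.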