🤖 AI Summary
To address the underuse of abundant unlabeled genuine face data and the weak generalizability of facial representations in face security tasks, this paper proposes FS-VFM, the first vision foundation model designed specifically for face security. Methodologically, it introduces three key components: (1) a hybrid self-supervised objective that synergizes masked image modeling with instance discrimination, built on a simple yet effective CRFR-P facial masking strategy; (2) a reliable local-to-global self-distillation mechanism that couples the two objectives; and (3) a lightweight, plug-and-play FS-Adapter with a real-anchor contrastive objective for efficient task-specific adaptation atop the frozen backbone. Across 11 public benchmarks, FS-VFM consistently generalizes better than existing vision foundation models spanning natural and facial domains, fully, weakly, and self-supervised paradigms, and small, base, and large ViT scales, and it even outperforms state-of-the-art task-specific methods in cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen diffusion facial forensics, while FS-Adapter offers an excellent efficiency-performance trade-off.
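The CRFR-P masking idea (fully covering one facial region while challenging the model across the others) can be illustrated with a toy sketch. This is a hedged reading of the strategy, not the paper's exact algorithm: it first masks every patch of one randomly chosen facial region, then tops up with random patches elsewhere until a target ratio is reached. The `region_ids` input (one facial-region label per ViT patch, e.g. from a face parser) and the function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def region_first_mask(region_ids, mask_ratio=0.75):
    """Toy region-aware masking in the spirit of CRFR-P (assumed reading,
    not the paper's exact procedure): fully mask one random facial region,
    then randomly mask remaining patches up to the overall ratio."""
    n = region_ids.shape[0]
    target = int(round(mask_ratio * n))
    mask = np.zeros(n, dtype=bool)
    # Step 1: cover one whole facial region (forces inter-region reasoning).
    region = rng.choice(np.unique(region_ids))
    mask[region_ids == region] = True
    # Step 2: top up with random patches from the remaining regions.
    rest = np.flatnonzero(~mask)
    extra = max(0, target - int(mask.sum()))
    mask[rng.choice(rest, size=min(extra, rest.size), replace=False)] = True
    return mask

# Toy 14x14 ViT patch grid with 5 fake "facial region" labels.
region_ids = rng.integers(0, 5, size=196)
m = region_first_mask(region_ids, mask_ratio=0.75)
print(int(m.sum()))  # 147 masked patches (75% of 196)
```

The design point is that plain random masking rarely hides an entire semantic region, so the model can interpolate locally; masking a whole region first forces reconstruction from other regions' context.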
📝 Abstract
With abundant, unlabeled real faces, how can we learn robust and transferable facial representations that boost generalization across various face security tasks? We make the first attempt and propose FS-VFM, a scalable self-supervised pre-training framework, to learn fundamental representations of real face images. We introduce three learning objectives, namely 3C, that synergize masked image modeling (MIM) and instance discrimination (ID), empowering FS-VFM to encode both local patterns and global semantics of real faces. Specifically, we formulate various facial masking strategies for MIM and devise a simple yet effective CRFR-P masking, which explicitly prompts the model to pursue meaningful intra-region Consistency and challenging inter-region Coherency. We present a reliable self-distillation mechanism that seamlessly couples MIM with ID to establish underlying local-to-global Correspondence. After pre-training, vanilla vision transformers (ViTs) serve as universal Vision Foundation Models for downstream Face Security tasks: cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen diffusion facial forensics. To transfer the pre-trained FS-VFM efficiently, we further propose FS-Adapter, a lightweight plug-and-play bottleneck atop the frozen backbone with a novel real-anchor contrastive objective. Extensive experiments on 11 public benchmarks demonstrate that our FS-VFM consistently generalizes better than diverse VFMs—spanning natural and facial domains; fully, weakly, and self-supervised paradigms; and small, base, and large ViT scales—and even outperforms SOTA task-specific methods, while FS-Adapter offers an excellent efficiency-performance trade-off. The code and models are available at https://fsfm-3c.github.io/fsvfm.html.
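The FS-Adapter idea (a lightweight bottleneck over a frozen backbone, trained with a real-anchor contrastive objective) can be sketched in a few lines. Everything below is an assumption-labeled toy, not the paper's implementation: the zero-initialized bottleneck, the single real-face anchor, and the logistic cosine-similarity loss are illustrative stand-ins for the actual architecture and objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class BottleneckAdapter:
    """Hypothetical plug-and-play adapter: down-project -> nonlinearity ->
    up-project, added residually to frozen backbone features. Only these
    two small matrices would be trained; the backbone stays frozen."""
    def __init__(self, dim=768, bottleneck=64):
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.W_up = np.zeros((bottleneck, dim))  # zero-init: identity at start

    def __call__(self, h):
        return h + gelu(h @ self.W_down) @ self.W_up

def real_anchor_contrastive(z, labels, anchor, tau=0.07):
    """Toy real-anchor contrastive loss (assumed form, not the paper's):
    pull real-face embeddings (label 1) toward a real-face anchor and push
    fakes (label 0) away, via a logistic loss on cosine similarity."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    a = anchor / np.linalg.norm(anchor)
    p = 1.0 / (1.0 + np.exp(-(z @ a) / tau))  # high p = close to real anchor
    return -np.mean(labels * np.log(p + 1e-9) + (1 - labels) * np.log(1 - p + 1e-9))

h = rng.normal(size=(4, 768))        # dummy frozen ViT features
adapter = BottleneckAdapter()
z = adapter(h)                       # identical to h at init (zero-init W_up)
labels = np.array([1, 1, 0, 0])      # real vs. fake faces
anchor = rng.normal(size=768)        # stand-in for a real-face anchor embedding
loss = float(real_anchor_contrastive(z, labels, anchor))
print(z.shape, loss > 0)
```

The zero-initialized up-projection is a common adapter trick (an assumption here, not claimed from the paper): training starts from the frozen backbone's unchanged features, so adaptation cannot initially degrade them.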