🤖 AI Summary
To address the insufficient discriminability between live and spoof features and the poor cross-domain generalization in face anti-spoofing (FAS), this paper proposes three novel training strategies to enhance the feature representation capability of the Learnable Descriptive Convolutional Vision Transformer (LDCformer): (1) dual-attention supervision, which jointly optimizes region-level and channel-level attention maps; (2) self-challenging supervision, which generates hard samples via adversarial data augmentation to improve robustness; and (3) transitional triplet mining, which dynamically constructs cross-domain hard triplets to strengthen fine-grained discrimination. This work is the first to jointly achieve fine-grained modeling of locally descriptive features and domain-generalization optimization in FAS. Extensive experiments on mainstream benchmarks, including OULU-NPU and CASIA-MFSD, demonstrate clear improvements over state-of-the-art methods in both feature discriminability and cross-domain generalization.
📝 Abstract
Face anti-spoofing (FAS) relies heavily on identifying live/spoof discriminative features to counter face presentation attacks. Recently, we proposed LDCformer, which incorporates the Learnable Descriptive Convolution (LDC) into ViT to model long-range dependencies of locally descriptive features for FAS. In this paper, we propose three novel training strategies that substantially boost LDCformer's feature characterization capability. The first strategy, dual-attention supervision, learns fine-grained liveness features guided by regional live/spoof attentions. The second strategy, self-challenging supervision, enhances feature discriminability by generating challenging training data. The third strategy, transitional triplet mining, narrows the cross-domain gap while maintaining the transitional relationship between live and spoof features, thereby improving the domain-generalization capability of LDCformer. Extensive experiments show that LDCformer, under joint supervision of the three training strategies, outperforms previous methods.
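The abstract does not spell out the loss formulation behind transitional triplet mining. As a rough illustration of the general idea, the sketch below combines a standard triplet margin loss with a cross-domain hard-triplet miner: each anchor is paired with the farthest same-liveness-label positive from a *different* domain and the closest opposite-label negative, so that minimizing the loss narrows the cross-domain gap while keeping live and spoof features apart. All names (`triplet_margin_loss`, `mine_hard_triplets`), the margin value, and the mining rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet margin loss: pull the anchor toward the positive
    (same liveness label) and push it away from the negative (opposite
    label) until their distance gap exceeds `margin` (assumed value)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

def mine_hard_triplets(features, labels, domains):
    """Illustrative cross-domain hard-triplet mining (assumption, not the
    paper's exact rule): for each anchor, select the farthest same-label
    positive from a different domain and the closest opposite-label
    negative anywhere, yielding (anchor, positive, negative) indices."""
    triplets = []
    n = len(features)
    for a in range(n):
        pos, neg = None, None
        best_pos, best_neg = -1.0, np.inf
        for i in range(n):
            if i == a:
                continue
            d = np.linalg.norm(features[a] - features[i])
            if labels[i] == labels[a] and domains[i] != domains[a] and d > best_pos:
                best_pos, pos = d, i       # hardest cross-domain positive
            elif labels[i] != labels[a] and d < best_neg:
                best_neg, neg = d, i       # hardest (closest) negative
        if pos is not None and neg is not None:
            triplets.append((a, pos, neg))
    return triplets
```

In a real training loop the mined index triplets would be fed back through `triplet_margin_loss` on the batch's feature embeddings; here the two functions are kept separate only for readability.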