🤖 AI Summary
To address the key challenges of cross-domain face presentation attack detection (clients' inability to share raw face data, severe distribution shift between training and test domains, and stringent privacy and model-security requirements), this paper proposes a source-free, prototype-driven adaptation framework. Methodologically, it introduces (1) an optimal-transport-guided adaptor that supports either lightweight-training or training-free adaptation while the base model's parameters stay frozen; and (2) geodesic mixup, an optimal-transport-based synthesis technique that generates augmented training data along the geodesic between source prototypes and the target distribution, allowing a lightweight classifier to be fine-tuned on the client side using only a small set of unlabeled target-domain samples. Evaluated under cross-domain and cross-attack settings, the framework achieves average relative improvements of 19.17% in Half Total Error Rate (HTER) and 8.58% in Area Under the Curve (AUC), significantly outperforming existing source-free approaches.
📝 Abstract
Developing a face anti-spoofing model that meets the security requirements of clients worldwide is challenging due to the domain gap between training datasets and diverse end-user test data. Moreover, for security and privacy reasons, it is undesirable for clients to share a large amount of their face data with service providers. In this work, we introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time using only a small sample of data, while keeping model parameters and training data inaccessible to the client. Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor that enables adaptation in either a lightweight-training or training-free fashion, without updating the base model's parameters. Furthermore, we propose geodesic mixup, an optimal transport-based synthesis method that generates augmented training data along the geodesic path between the source prototypes and the target data distribution. This allows training a lightweight classifier that effectively adapts to target-specific characteristics while retaining essential knowledge learned from the source domain. In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC, respectively.
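The abstract describes geodesic mixup only at a high level. As a rough illustration of the underlying idea: in Euclidean feature space, the Wasserstein-2 geodesic between matched point masses reduces to straight-line displacement interpolation, x(t) = (1 − t)·source + t·target, under an optimal pairing. The sketch below is a minimal assumption-laden approximation, not the paper's implementation; the function name is hypothetical, and the exact-assignment coupling (which requires equally many prototypes and target samples) stands in for whatever transport plan the paper actually uses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def geodesic_mixup(prototypes, targets, t=0.5):
    """Displacement interpolation between source prototypes and target
    features under an optimal pairing (illustrative sketch only).

    prototypes: (n, d) source prototype features
    targets:    (n, d) target-domain features (equal count assumed here)
    t:          position along the geodesic, 0 = prototypes, 1 = targets
    """
    # Squared-Euclidean cost matrix between every prototype/target pair.
    cost = ((prototypes[:, None, :] - targets[None, :, :]) ** 2).sum(axis=-1)
    # Exact optimal assignment = optimal transport for uniform weights
    # over equally sized point sets.
    rows, cols = linear_sum_assignment(cost)
    matched = targets[cols]
    # Straight-line (Wasserstein-2 geodesic) interpolation of matched pairs.
    return (1.0 - t) * prototypes[rows] + t * matched
```

Sampling a fresh t per pair (e.g. t ~ Uniform(0, 1)) would yield synthetic training points spread along the source-to-target path, which is the kind of augmentation the abstract attributes to geodesic mixup.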