🤖 AI Summary
To address weak generalization and limited performance under low-data regimes in fingerprint presentation attack detection (PAD), this paper pioneers the integration of saliency-guided training into fingerprint liveness detection. The authors propose a multi-source saliency paradigm: (1) constructing the first manually annotated fingerprint saliency dataset (800 human-annotated maps from a 50-participant study), and (2) leveraging minutiae distribution maps, image quality maps, and autoencoder reconstruction error to generate pseudo-saliency maps—collectively guiding the model to focus on discriminative regions. The method reaches first-place performance on the LivDet-2021 benchmark and enhances cross-domain generalization, especially under resource-constrained settings. All saliency annotations, source code, and trained models are publicly released.
📝 Abstract
Saliency-guided training, which directs model learning to important regions of images, has demonstrated generalization improvements across various biometric presentation attack detection (PAD) tasks. This paper presents its first application to fingerprint PAD. We conducted a 50-participant study to create a dataset of 800 human-annotated fingerprint perceptually-important maps, which we explore alongside algorithmically-generated "pseudosaliency," including minutiae-based, image quality-based, and autoencoder-based saliency maps. Evaluating on the 2021 Fingerprint Liveness Detection Competition (LivDet-2021) testing set, we explore various configurations within five distinct training scenarios to assess the impact of saliency-guided training on accuracy and generalization. Our findings demonstrate the effectiveness of saliency-guided training for fingerprint PAD in both limited and large data contexts, and we present a configuration capable of earning first place on the LivDet-2021 benchmark. Our results highlight saliency-guided training's promise for increased model generalization, its effectiveness when data is limited, and its potential to scale to larger datasets in fingerprint PAD. All collected saliency data and trained models are released with the paper to support reproducible research.
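To make the core idea concrete: saliency-guided training typically augments the classification loss with a term penalizing disagreement between the model's attention map (e.g., a class activation map) and a human or pseudo saliency map. The sketch below is an illustrative NumPy formulation under assumed choices (MSE penalty, blending weight `alpha`); it is not the paper's exact loss.

```python
import numpy as np

def saliency_guided_loss(logits, label, model_heatmap, human_saliency, alpha=0.5):
    """Illustrative saliency-guided loss: blends softmax cross-entropy
    on the classifier output with the mean-squared disagreement between
    the model's attention heatmap and a reference saliency map.
    `alpha` and the MSE term are assumed for this sketch, not taken
    from the paper."""
    # Numerically stable softmax cross-entropy for the true class
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    ce = -np.log(probs[label] + 1e-12)

    # Min-max normalize both maps to [0, 1] before comparing them
    def norm(m):
        m = m - m.min()
        return m / (m.max() + 1e-12)

    saliency_mse = np.mean((norm(model_heatmap) - norm(human_saliency)) ** 2)
    return (1 - alpha) * ce + alpha * saliency_mse
```

When the model's heatmap matches the reference saliency map, the penalty term vanishes and the loss reduces to a scaled cross-entropy; a mismatched heatmap is penalized, steering the network toward the annotated discriminative regions.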