Trade-offs in Cross-Domain Generalization of Foundation Model Fine-Tuned for Biometric Applications

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a pervasive over-specialization problem in foundation vision-language models (e.g., CLIP) when fine-tuned for biometric tasks, including face recognition, morphing attack detection, and presentation attack detection, leading to severe degradation on general visual tasks. Through systematic evaluation across 14 general-purpose vision datasets and multiple biometric benchmarks, we identify task complexity, classifier-head design, and model capacity as key determinants of catastrophic forgetting; larger models retain more of their original generalization capability. We propose the "capacity-mitigates-over-specialization" mechanism and validate it using FRoundation-ViT-L: fine-tuning yields up to a 58.52% improvement in face recognition performance on IJB-C, yet ImageNetV2 accuracy drops to 51.63% from CLIP's original 69.84%. This trade-off empirically confirms the fundamental tension between task-specific specialization and cross-domain generalization induced by fine-tuning.

📝 Abstract
Foundation models such as CLIP have demonstrated exceptional zero- and few-shot transfer capabilities across diverse vision tasks. However, when fine-tuned for highly specialized biometric tasks such as face recognition (FR), morphing attack detection (MAD), and presentation attack detection (PAD), these models may suffer from over-specialization, losing one of their foundational strengths: cross-domain generalization. In this work, we systematically quantify these trade-offs by evaluating three instances of CLIP fine-tuned for FR, MAD, and PAD. We evaluate each adapted model, as well as the original CLIP baseline, on 14 general vision datasets under zero-shot and linear-probe protocols, alongside common FR, MAD, and PAD benchmarks. Our results indicate that fine-tuned models suffer from over-specialization, especially when fine-tuned for the complex task of FR. They also show that task complexity and classification-head design, multi-class (FR) vs. binary (MAD and PAD), correlate with the degree of catastrophic forgetting. The FRoundation model with the ViT-L backbone outperforms other approaches on the large-scale FR benchmark IJB-C, achieving an improvement of up to 58.52%. However, it experiences a substantial performance drop on ImageNetV2, reaching only 51.63% compared to 69.84% achieved by the baseline CLIP model. Moreover, the larger CLIP architecture consistently preserves more of the model's original generalization ability than the smaller variant, indicating that increased model capacity may help mitigate over-specialization.
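The two evaluation protocols named in the abstract can be sketched in a few lines. This is a minimal illustration only: synthetic embeddings stand in for frozen CLIP image features, and the per-class "text" embeddings are approximated by class means, since the paper's actual models, prompts, and datasets are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 64, 200
# Synthetic stand-ins for frozen CLIP image embeddings of two classes
class_means = rng.normal(size=(2, d))
y = rng.integers(0, 2, size=n)
X = class_means[y] + 0.5 * rng.normal(size=(n, d))

# Zero-shot protocol: classify by cosine similarity between image
# embeddings and per-class "text" embeddings (here: the class means)
T = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
zero_shot_pred = (Xn @ T.T).argmax(axis=1)
zero_shot_acc = (zero_shot_pred == y).mean()

# Linear-probe protocol: keep the encoder frozen and fit only a
# linear classifier on the extracted features
probe = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
probe_acc = probe.score(X[150:], y[150:])
print(f"zero-shot: {zero_shot_acc:.2f}, linear probe: {probe_acc:.2f}")
```

The key distinction the paper relies on is that neither protocol updates the backbone, so any accuracy gap between the fine-tuned and baseline models isolates what fine-tuning changed in the representation itself.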
Problem

Research questions and friction points this paper is trying to address.

Evaluating cross-domain generalization loss in fine-tuned CLIP models
Quantifying over-specialization trade-offs in biometric tasks
Assessing catastrophic forgetting in face recognition systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning CLIP for biometric tasks
Evaluating cross-domain generalization trade-offs
Larger model capacity reduces over-specialization