🤖 AI Summary
This study identifies a pervasive over-specialization problem in foundation vision-language models (e.g., CLIP) when fine-tuned for biometric tasks, including face recognition, morphing attack detection, and presentation attack detection, leading to severe degradation on general visual tasks. Through systematic evaluation across 14 general-purpose vision datasets and multiple biometric benchmarks, the authors identify task complexity, classifier-head design, and model capacity as key determinants of catastrophic forgetting; larger models retain more of their original generalization ability, suggesting that increased capacity helps mitigate over-specialization. This is illustrated with FRoundation-ViT-L: fine-tuning yields up to a 58.52% improvement in face recognition accuracy on IJB-C, yet ImageNetV2 accuracy drops to 51.63%, a substantial decline from the original CLIP's 69.84%. This trade-off empirically confirms the fundamental tension between task-specific specialization and cross-domain generalization induced by fine-tuning.
📝 Abstract
Foundation models such as CLIP have demonstrated exceptional zero- and few-shot transfer capabilities across diverse vision tasks. However, when fine-tuned for highly specialized biometric tasks, namely face recognition (FR), morphing attack detection (MAD), and presentation attack detection (PAD), these models may suffer from over-specialization and thereby lose one of their foundational strengths: cross-domain generalization. In this work, we systematically quantify this trade-off by evaluating three instances of CLIP fine-tuned for FR, MAD, and PAD. We evaluate each adapted model, as well as the original CLIP baseline, on 14 general vision datasets under zero-shot and linear-probe protocols, alongside common FR, MAD, and PAD benchmarks. Our results indicate that the fine-tuned models suffer from over-specialization, especially when fine-tuned for the complex task of FR. They also show that task complexity and classification-head design, multi-class (FR) vs. binary (MAD and PAD), correlate with the degree of catastrophic forgetting. The FRoundation model with the ViT-L backbone outperforms other approaches on the large-scale FR benchmark IJB-C, achieving an improvement of up to 58.52%. However, it experiences a substantial performance drop on ImageNetV2, reaching only 51.63% compared to the 69.84% achieved by the baseline CLIP model. Moreover, the larger CLIP architecture consistently preserves more of the model's original generalization ability than the smaller variant, indicating that increased model capacity may help mitigate over-specialization.
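To make the two evaluation protocols concrete, below is a minimal NumPy sketch of how zero-shot and linear-probe accuracy are typically computed on frozen embeddings. The embeddings here are synthetic stand-ins (random vectors correlated with per-class prompt embeddings), not real CLIP outputs, and the least-squares one-vs-all probe is a simplification of the logistic-regression probe commonly used; all variable names and shapes are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen encoder outputs; in practice these
# would come from a real CLIP image encoder and text encoder.
d, n_classes, n_train, n_test = 64, 5, 200, 50
class_text_emb = rng.normal(size=(n_classes, d))       # one prompt embedding per class
labels_train = rng.integers(0, n_classes, size=n_train)
labels_test = rng.integers(0, n_classes, size=n_test)
# Image embeddings correlated with their class's text embedding plus noise.
img_train = class_text_emb[labels_train] + 0.5 * rng.normal(size=(n_train, d))
img_test = class_text_emb[labels_test] + 0.5 * rng.normal(size=(n_test, d))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def zero_shot_predict(img_emb, text_emb):
    """Zero-shot protocol: cosine similarity between image embeddings
    and class-prompt text embeddings; predict the most similar class."""
    sims = l2_normalize(img_emb) @ l2_normalize(text_emb).T
    return sims.argmax(axis=1)

def linear_probe_predict(train_emb, train_labels, test_emb):
    """Linear-probe protocol: fit a linear classifier on frozen train
    embeddings (least-squares one-vs-all here for self-containment)."""
    one_hot = np.eye(n_classes)[train_labels]
    W, *_ = np.linalg.lstsq(train_emb, one_hot, rcond=None)
    return (test_emb @ W).argmax(axis=1)

zs_acc = (zero_shot_predict(img_test, class_text_emb) == labels_test).mean()
lp_acc = (linear_probe_predict(img_train, labels_train, img_test) == labels_test).mean()
print(f"zero-shot acc: {zs_acc:.2f}, linear-probe acc: {lp_acc:.2f}")
```

Over-specialization would show up in this setup as both accuracies dropping on general datasets once the encoder producing the embeddings has been fine-tuned on a narrow biometric task.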