🤖 AI Summary
Fine-tuning Vision-Language Models (VLMs) can implicitly undermine prediction trustworthiness, defined as the joint assurance of correctness and evidential validity, risking "correct but erroneously justified" predictions, especially in safety-critical applications.
Method: We propose two novel metrics, Prediction Trustworthiness and Inference Reliability, and establish the first evaluation paradigm that jointly quantifies accuracy and evidential credibility. Building on CLIP-based architectures, we systematically assess mainstream fine-tuning methods via multi-scale attention analysis, attribution visualization, and adversarial evidence perturbation.
Contribution/Results: Empirical analysis reveals that fine-tuning consistently improves accuracy yet significantly increases the proportion of correct predictions supported by invalid evidence. Crucially, when models genuinely rely on valid evidence, their accuracy markedly improves, a pattern that holds robustly across diverse datasets and distribution shifts. This work exposes a critical trade-off between conventional accuracy gains and evidential integrity, advocating for evidence-aware model evaluation and optimization.
📝 Abstract
Vision-Language Models (VLMs), such as CLIP, have already seen widespread application, and researchers actively fine-tune them for safety-critical domains. In these domains, prediction rationality is crucial: a prediction should be correct and based on valid evidence. Yet the impact of fine-tuning on the prediction rationality of VLMs is seldom investigated. To study this problem, we propose two new metrics, Prediction Trustworthiness and Inference Reliability. We conducted extensive experiments across various settings and observed several interesting phenomena. On the one hand, we found that widely adopted fine-tuning methods led to more correct predictions based on invalid evidence, potentially undermining the trustworthiness of correct predictions from fine-tuned VLMs. On the other hand, when fine-tuned VLMs did identify valid evidence of target objects, they were more likely to make correct predictions. Moreover, these findings remain consistent under distribution shifts and across various experimental settings. We hope our research offers fresh insights into VLM fine-tuning.