Beyond Accuracy: On the Effects of Fine-tuning Towards Vision-Language Model's Prediction Rationality

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-tuning vision-language models (VLMs) can implicitly undermine prediction rationality, the joint requirement that a prediction be both correct and grounded in valid evidence, producing risky "correct but invalidly justified" predictions in safety-critical applications. Method: The authors propose two new metrics, Prediction Trustworthiness and Inference Reliability, establishing an evaluation paradigm that jointly quantifies accuracy and evidential validity. Building on CLIP-based architectures, they systematically assess mainstream fine-tuning methods via multi-scale attention analysis, attribution visualization, and adversarial evidence perturbation. Contribution/Results: Empirical analysis shows that fine-tuning consistently improves accuracy yet significantly increases the proportion of correct predictions supported by invalid evidence. Conversely, when fine-tuned models genuinely rely on valid evidence, their accuracy improves markedly, a phenomenon that holds across diverse datasets and under distribution shifts. The work exposes a trade-off between conventional accuracy gains and evidential integrity, advocating evidence-aware model evaluation and optimization.

📝 Abstract
Vision-Language Models (VLMs), such as CLIP, have already seen widespread applications. Researchers actively engage in further fine-tuning VLMs in safety-critical domains. In these domains, prediction rationality is crucial: the prediction should be correct and based on valid evidence. Yet, for VLMs, the impact of fine-tuning on prediction rationality is seldom investigated. To study this problem, we proposed two new metrics called Prediction Trustworthiness and Inference Reliability. We conducted extensive experiments on various settings and observed some interesting phenomena. On the one hand, we found that the well-adopted fine-tuning methods led to more correct predictions based on invalid evidence. This potentially undermines the trustworthiness of correct predictions from fine-tuned VLMs. On the other hand, having identified valid evidence of target objects, fine-tuned VLMs were more likely to make correct predictions. Moreover, the findings are consistent under distribution shifts and across various experimental settings. We hope our research offers fresh insights into VLM fine-tuning.
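The abstract does not give formal definitions of the two metrics. One plausible reading, shown as a minimal sketch below, is that Prediction Trustworthiness measures the fraction of correct predictions grounded in valid evidence, while Inference Reliability measures the fraction of valid-evidence predictions that turn out correct. The function names and definitions are illustrative assumptions, not the paper's actual formulation:

```python
def prediction_trustworthiness(correct, evidence_valid):
    """Assumed reading: among correct predictions, the fraction that are
    supported by valid evidence (e.g., attention on the target object)."""
    correct_idx = [i for i, c in enumerate(correct) if c]
    if not correct_idx:
        return 0.0
    return sum(1 for i in correct_idx if evidence_valid[i]) / len(correct_idx)


def inference_reliability(correct, evidence_valid):
    """Assumed reading: among predictions grounded in valid evidence,
    the fraction that are correct."""
    valid_idx = [i for i, v in enumerate(evidence_valid) if v]
    if not valid_idx:
        return 0.0
    return sum(1 for i in valid_idx if correct[i]) / len(valid_idx)


# Toy example: four predictions with per-sample correctness and an
# (assumed binary) evidence-validity judgment.
correct = [True, True, False, True]
evidence_valid = [True, False, True, True]
print(round(prediction_trustworthiness(correct, evidence_valid), 3))  # 0.667
print(round(inference_reliability(correct, evidence_valid), 3))       # 0.667
```

Under this reading, the paper's two findings map directly onto the metrics: fine-tuning lowers the first ratio (more correct-but-invalid predictions) while valid evidence raises the second (valid evidence makes correctness more likely).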
Problem

Research questions and friction points this paper is trying to address.

Investigating fine-tuning impact on vision-language model prediction rationality
Assessing prediction trustworthiness and inference reliability post-fine-tuning
Examining evidence validity changes in safety-critical domain fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed metrics Prediction Trustworthiness and Inference Reliability
Evaluated fine-tuning effects on prediction rationality
Identified evidence validity impact on correct predictions
Authors

Qitong Wang (DeepREAL Lab, Department of Computer & Information Sciences, University of Delaware)
Tang Li (DeepREAL Lab, Department of Computer & Information Sciences, University of Delaware)
Kien X. Nguyen (University of Delaware)
Xi Peng (DeepREAL Lab, Department of Computer & Information Sciences, University of Delaware)