🤖 AI Summary
Vision-language models (VLMs) suffer from poor calibration during test-time prompt tuning due to modality misalignment: a single feature dimension dominates across modalities, amplifying prediction sensitivity and degrading confidence calibration. Method: We identify this alignment bias along the dominant dimension as a primary source of miscalibration and propose a dimension-wise entropy maximization regularizer that encourages textual features to distribute uniformly across all dimensions, mitigating the representational mismatch between modalities. Integrated into a contrastive learning framework, our method jointly optimizes prompt parameters and feature-dimension entropy at test time, without requiring training data, for dynamic zero-shot calibration. Contribution/Results: Extensive experiments demonstrate significant calibration improvements across diverse real-world scenarios: expected calibration error (ECE) is reduced by up to 38%, while confidence consistency and out-of-distribution robustness are substantially enhanced.
📝 Abstract
The test-time adaptation paradigm offers flexibility under domain shift by adapting a source model immediately on unlabeled target data. Vision-Language Models (VLMs) leverage their generalization capabilities across diverse downstream tasks, and test-time prompt tuning has emerged as a prominent way to adapt them. In this work, we examine contrastive VLMs and identify a modality gap caused by a single dominant feature dimension shared across modalities. We observe that the dominant dimensions in both the text and image modalities exhibit high predictive sensitivity, and that constraining their influence reduces calibration error. Building on this insight, we propose dimensional entropy maximization, which regularizes the distribution of textual features toward uniformity to mitigate the dependency on dominant dimensions. Our method alleviates the degradation of calibration performance in test-time prompt tuning, offering a simple yet effective way to enhance the reliability of VLMs in real-world deployment scenarios.
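The dimensional entropy regularizer can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of pooling squared feature magnitudes over class prompts to obtain a per-dimension energy distribution is an assumption, and the actual method would add the (negated) entropy to the test-time tuning loss so that gradient descent maximizes it.

```python
import numpy as np

def dimensional_entropy(text_features, eps=1e-12):
    """Entropy of the per-dimension energy distribution of text features.

    text_features: (num_classes, d) array of L2-normalized class-prompt
    embeddings. A single dominant dimension concentrates the energy
    distribution and yields low entropy; maximizing this entropy pushes
    the energy toward uniformity across dimensions.
    """
    energy = np.mean(text_features ** 2, axis=0)      # (d,) energy per dimension
    p = energy / energy.sum()                         # distribution over dimensions
    return float(-(p * np.log(p + eps)).sum())        # Shannon entropy, max = log(d)

# As a test-time regularizer, one would minimize
#   loss = tuning_loss - lam * dimensional_entropy(text_features)
# so that no single dimension dominates the textual representation.
```

For intuition: perfectly uniform features attain the maximum entropy log(d), while features whose energy collapses onto one dimension score near zero, so the regularizer directly penalizes the dominant-dimension bias discussed above.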