🤖 AI Summary
Vision-language models (VLMs) face two key reliability bottlenecks in test-time adaptation (TTA): (1) distorted entropy estimation under distribution shifts, leading to error accumulation in cache-based sample selection; and (2) rigid decision boundaries ill-suited for substantial downstream distribution changes. To address these, we propose ReTA—a robust TTA framework that introduces a consistency-aware entropy reweighting mechanism to improve the reliability of cache sample selection, and models class-level text embeddings as multivariate Gaussian distributions to enable dynamic, calibrated decision boundaries. ReTA jointly leverages the vision and language modalities for fully unsupervised adaptive inference. Evaluated across diverse real-world distribution shift benchmarks, ReTA consistently outperforms state-of-the-art TTA methods—particularly under severe visual corruptions—achieving substantial gains in both prediction accuracy and stability, and offering a more reliable route to zero-shot generalization of VLMs.
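To give a feel for the consistency-aware reweighting idea, here is a minimal NumPy sketch. The function names and the exact weighting rule (dividing entropy by cross-view agreement) are our own illustrative assumptions, not the paper's implementation; the point is only that a sample whose augmented views disagree gets its entropy inflated and therefore ranks lower for cache admission:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def consistency_weight(probs_views):
    """Fraction of augmented views whose argmax agrees with the
    mean prediction's argmax (one simple consistency measure)."""
    mean_pred = probs_views.mean(axis=0)
    majority = mean_pred.argmax()
    return np.mean(probs_views.argmax(axis=1) == majority)

def reweighted_entropy(probs_views):
    """Consistency-aware score: low values mean confident AND consistent.
    Inconsistent samples get their entropy inflated so they rank lower
    when selecting low-entropy samples for the cache."""
    mean_pred = probs_views.mean(axis=0)
    w = consistency_weight(probs_views)
    return entropy(mean_pred) / max(w, 1e-6)
```

Under this toy rule, a sample whose views all agree keeps its (low) entropy, while a sample whose views each pick a different class has its score blown up, so plain low-entropy noise is kept out of the cache.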
📝 Abstract
Vision-language models (VLMs) exhibit remarkable zero-shot capabilities but struggle with distribution shifts in downstream tasks when labeled data is unavailable, which has motivated the development of Test-Time Adaptation (TTA) to improve VLMs' performance during inference without annotations. Among various TTA approaches, cache-based methods show promise by preserving historical knowledge from low-entropy samples in a dynamic cache, supporting efficient adaptation. However, these methods face two critical reliability challenges: (1) entropy often becomes unreliable under distribution shifts, causing error accumulation in the cache and degraded adaptation performance; (2) the final predictions may be unreliable due to inflexible decision boundaries that fail to accommodate large downstream shifts. To address these challenges, we propose a Reliable Test-time Adaptation (ReTA) method that integrates two complementary strategies to enhance reliability from two perspectives. First, to mitigate the unreliability of entropy as a sample selection criterion for cache construction, we introduce Consistency-aware Entropy Reweighting (CER), which incorporates consistency constraints to weight entropy during cache updates. While conventional approaches rely solely on low entropy for cache prioritization and risk admitting noise, our method leverages predictive consistency to maintain a high-quality cache and facilitate more robust adaptation. Second, we present Diversity-driven Distribution Calibration (DDC), which models class-wise text embeddings as multivariate Gaussian distributions, enabling adaptive decision boundaries for more accurate predictions across visually diverse content. Extensive experiments demonstrate that ReTA consistently outperforms state-of-the-art methods, particularly under challenging real-world distribution shifts.
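To make the distribution-calibration idea concrete, the sketch below fits one Gaussian per class over that class's prompt embeddings and classifies by Mahalanobis distance. This is a hypothetical NumPy illustration — the helper names, the shrinkage constant, and the distance-based classifier are our assumptions, not the paper's exact DDC formulation — but it shows how covariance-shaped boundaries adapt to each class's embedding diversity, unlike a single fixed prototype per class:

```python
import numpy as np

def fit_class_gaussians(text_embeds, shrinkage=1e-3):
    """text_embeds: dict mapping class -> (n_prompts, d) array of
    text embeddings from multiple prompts. Fit a mean and a shrunk
    covariance per class; store the precision (inverse covariance)."""
    gaussians = {}
    for c, E in text_embeds.items():
        mu = E.mean(axis=0)
        # Shrinkage keeps the covariance invertible with few prompts.
        cov = np.cov(E, rowvar=False) + shrinkage * np.eye(E.shape[1])
        gaussians[c] = (mu, np.linalg.inv(cov))
    return gaussians

def classify(x, gaussians):
    """Assign x to the class with the smallest Mahalanobis distance,
    i.e. a decision boundary shaped by each class's prompt diversity."""
    scores = {c: (x - mu) @ prec @ (x - mu)
              for c, (mu, prec) in gaussians.items()}
    return min(scores, key=scores.get)
```

With isotropic covariances this reduces to nearest-prototype classification; the calibration only changes the boundary when a class's prompt embeddings are genuinely spread out in some directions.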