🤖 AI Summary
This work addresses the limited zero-shot prediction performance of vision-language models under distribution shifts, which primarily stems from the modality gap and task-irrelevant visual noise. To tackle this, the authors propose a cross-modal semantic subspace alignment mechanism: principal component analysis (PCA) extracts the dominant subspaces of the visual and textual representations, and the two subspaces are aligned by minimizing their chordal distance, thereby bridging the modality gap. Concurrently, visual features are projected onto the task-relevant textual subspace to filter out irrelevant noise, improving the reliability of zero-shot predictions and better guiding test-time adaptation. The approach unifies modality alignment and redundancy suppression in a single framework and yields an average improvement of 2.24% across multiple benchmarks and model architectures, outperforming existing test-time adaptation methods.
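The two core operations described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names, the SVD-based PCA, and the closed-form chordal distance (the square root of k minus the squared Frobenius norm of the product of the two orthonormal bases) are assumptions about how such a mechanism could be realized.

```python
import numpy as np

def principal_subspace(feats, k):
    """Orthonormal basis (d, k) of the top-k PCA subspace of a feature matrix.

    Rows of `feats` are embeddings; the basis comes from the right singular
    vectors of the centered matrix.
    """
    centered = feats - feats.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def chordal_distance(u, v):
    """Chordal distance between two k-dim subspaces with orthonormal bases.

    Equals sqrt(k - ||U^T V||_F^2) = sqrt(sum_i sin^2(theta_i)), where the
    theta_i are the principal angles; 0 for identical subspaces.
    """
    k = u.shape[1]
    return np.sqrt(max(k - np.linalg.norm(u.T @ v, "fro") ** 2, 0.0))

# Hypothetical embedding matrices (batch x dim); dimensions are illustrative.
rng = np.random.default_rng(0)
visual_basis = principal_subspace(rng.normal(size=(256, 64)), k=8)
text_basis = principal_subspace(rng.normal(size=(32, 64)), k=8)
gap = chordal_distance(visual_basis, text_basis)  # scalar alignment objective
```

In an actual adaptation loop, `gap` would be minimized with respect to the visual encoder's (or a learnable projection's) parameters, pulling the visual manifold toward the textual semantic anchor.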
📝 Abstract
Vision-language models (VLMs), despite their extraordinary zero-shot capabilities, are vulnerable to distribution shifts. Test-time adaptation (TTA) emerges as a predominant strategy to adapt VLMs to unlabeled test data on the fly. However, existing TTA methods heavily rely on zero-shot predictions as pseudo-labels for self-training, which can be unreliable under distribution shifts and misguide adaptation due to two fundamental limitations. First (Modality Gap), distribution shifts induce gaps between visual and textual modalities, making cross-modal relations inaccurate. Second (Visual Nuisance), visual embeddings encode rich but task-irrelevant noise that often overwhelms task-specific semantics under distribution shifts. To address these limitations, we propose SubTTA, which aligns the semantic subspaces of both modalities to enhance zero-shot predictions to better guide the TTA process. To bridge the modality gap, SubTTA extracts the principal subspaces of both modalities and aligns the visual manifold to the textual semantic anchor by minimizing their chordal distance. To eliminate visual nuisance, SubTTA projects the aligned visual features onto the task-specific textual subspace, which filters out task-irrelevant noise by constraining visual embeddings within the valid semantic span, and standard TTA is further performed on the purified space to refine the decision boundaries. Extensive experiments on various benchmarks and VLM architectures demonstrate the effectiveness of SubTTA, yielding an average improvement of 2.24% over state-of-the-art TTA methods.
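The nuisance-filtering step in the abstract, projecting visual embeddings onto the task-specific textual subspace, amounts to a standard orthogonal projection. The sketch below assumes the textual subspace is spanned by the top PCA directions of the class-prompt embeddings; names and dimensions are illustrative, not from the paper.

```python
import numpy as np

def textual_subspace(text_embs, k):
    """Orthonormal basis (d, k) of the top-k PCA directions of text embeddings."""
    centered = text_embs - text_embs.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def purify(visual_embs, basis):
    """Project visual embeddings onto span(basis), discarding the orthogonal
    complement, i.e. the components outside the valid semantic span."""
    return visual_embs @ basis @ basis.T

# Hypothetical class-prompt and test-image embeddings (batch x dim).
rng = np.random.default_rng(0)
prompt_embs = rng.normal(size=(40, 512))   # e.g. one embedding per class prompt
image_embs = rng.normal(size=(8, 512))     # test-time visual features
basis = textual_subspace(prompt_embs, k=16)
purified = purify(image_embs, basis)       # nuisance-suppressed features
```

Because the basis is orthonormal, the projection is idempotent: purifying already-purified features changes nothing, so downstream TTA can operate on `purified` as a stable, denoised representation.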