🤖 AI Summary
To address the degradation in generalization and calibration of zero-shot vision-language models (e.g., CLIP) under test-time visual out-of-distribution (OOD) shifts, this paper proposes Test-time Noise Tuning (TNT), a label-free adaptation mechanism. TNT introduces learnable noise, optimized directly on the input image, as a new parameter that refines visual representations in single-sample settings; it is the first work to treat noise as a learnable variable during inference. To stabilize optimization, TNT enforces cross-view consistency of embedding distances and combines logit scaling with confidence-aware view selection. Evaluated on natural distribution shift benchmarks, TNT achieves an average accuracy gain of 7.38%; on cross-dataset OOD evaluation, it improves performance by 0.80%. These results demonstrate substantial gains in both OOD robustness and predictive reliability, without requiring labeled data or architectural modifications.
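The core idea, optimizing input-space noise on a single unlabeled test sample, can be sketched in a toy form. This is a minimal illustration under assumed components (a frozen random "encoder" matrix `C` standing in for CLIP's class embeddings, and entropy minimization as the unsupervised objective); it is not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

# Hypothetical stand-ins for frozen components: class embeddings C, test sample x.
C = rng.standard_normal((5, 16))   # 5 classes, 16-dim features
x = rng.standard_normal(16)        # a single unlabeled test sample

delta = np.zeros(16)               # learnable input-space noise (the only parameter)
lr = 0.05

def step(delta):
    z = C @ (x + delta)            # logits from the noise-perturbed input
    p = softmax(z)
    H = entropy(p)
    # Analytic entropy gradient w.r.t. logits: dH/dz_k = -p_k (log p_k + H)
    dH_dz = -p * (np.log(p + 1e-12) + H)
    grad = C.T @ dH_dz             # chain rule back to the noise vector
    return delta - lr * grad

H0 = entropy(softmax(C @ (x + delta)))
for _ in range(30):
    delta = step(delta)
H_final = entropy(softmax(C @ (x + delta)))
print(H0, H_final)  # prediction entropy should drop as the noise adapts
```

The frozen model is never updated; only `delta` moves, which is what makes the adaptation label-free and architecture-preserving.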
📝 Abstract
Recently, test-time adaptation has garnered attention as a method for tuning models without labeled data. The conventional approach to adapting pre-trained vision-language models (VLMs) at test time primarily focuses on tuning learnable prompts; however, this overlooks potential distribution shifts in the visual representations themselves. In this work, we address this limitation by introducing Test-Time Noise Tuning (TNT), a novel method for handling unpredictable shifts in the visual space. TNT leverages, for the first time, a noise adaptation strategy that optimizes learnable noise directly in the visual input space, enabling adaptive feature learning from a single test sample. We further introduce a novel approach to inter-view representation alignment that explicitly enforces coherence in embedding distances, ensuring consistent feature representations across views. Combined with scaled logits and confident view selection at inference, TNT substantially enhances VLM generalization and calibration, achieving average gains of +7.38% on natural distribution shift benchmarks and +0.80% on cross-dataset evaluations over zero-shot CLIP. These improvements lay a strong foundation for adaptive out-of-distribution handling.
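The "confident view selection" mentioned above can be illustrated with a small sketch. Here, entropy-based filtering of augmented views followed by logit averaging is an assumed form of the mechanism; the function name, the `keep_frac` parameter, and the toy logits are all hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_confident_views(view_logits, keep_frac=0.5):
    """Average logits over the most confident (lowest-entropy) augmented views."""
    p = softmax(view_logits)                    # (n_views, n_classes)
    H = -(p * np.log(p + 1e-12)).sum(axis=-1)   # per-view prediction entropy
    k = max(1, int(len(view_logits) * keep_frac))
    keep = np.argsort(H)[:k]                    # indices of most confident views
    return view_logits[keep].mean(axis=0), keep

# Four hypothetical views of one image: two confident, two near-uniform.
views = np.array([
    [4.0, 0.0, 0.0],   # confident on class 0
    [3.5, 0.2, 0.1],   # confident on class 0
    [0.1, 0.0, 0.2],   # nearly uniform
    [0.0, 0.1, 0.1],   # nearly uniform
])
avg_logits, kept = select_confident_views(views, keep_frac=0.5)
pred = int(np.argmax(avg_logits))
```

Discarding high-entropy views before averaging keeps noisy augmentations from diluting the prediction, which is the intuition behind using only confident views at inference.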