🤖 AI Summary
Vision-language models (VLMs) suffer from insufficient visual concept representation during test-time adaptation (TTA) due to semantic ambiguity in textual class prototypes.
Method: We propose ProtoMM, a training-free multimodal prototype learning framework. Its core innovations are: (i) dynamic multimodal prototypes that jointly model class names and adaptively extracted visual particles as discrete distributions, continuously updated over the test stream; and (ii) optimal transport to quantify semantic distance between prototypes and test images, enabling high-precision matching and weighted fusion.
Results: ProtoMM achieves state-of-the-art performance across 15 zero-shot benchmarks, improving average top-1 accuracy by 1.03% over prior methods on ImageNet and its variant datasets. It significantly enhances generalization under unknown or distribution-shifted scenarios, demonstrating robustness without any parameter updates or auxiliary training.
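The optimal-transport matching in innovation (ii) can be illustrated with a small sketch. The snippet below computes an entropic-regularized transport plan (Sinkhorn iterations) between a prototype's discrete support (textual description plus visual particles) and an image's patch features, then reads off a semantic distance. The uniform weights, cosine cost, and regularization value are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Entropic-regularized optimal transport (Sinkhorn iterations).

    cost: (m, n) pairwise cost matrix between prototype support and image features.
    a: (m,) prototype weights; b: (n,) image-feature weights (both sum to 1).
    Returns the transport plan T whose row/column marginals match a and b.
    """
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns to match b
        u = a / (K @ v)                  # scale rows to match a
    return u[:, None] * K * v[None, :]

# Toy example: a prototype with 3 support points vs. an image with 4 patch features.
rng = np.random.default_rng(0)
proto = rng.normal(size=(3, 8))
proto /= np.linalg.norm(proto, axis=1, keepdims=True)
patches = rng.normal(size=(4, 8))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)

cost = 1.0 - proto @ patches.T           # cosine distance as transport cost (assumption)
a = np.full(3, 1 / 3)                    # uniform prototype weights (assumption)
b = np.full(4, 1 / 4)                    # uniform image-feature weights (assumption)

T = sinkhorn(cost, a, b)
ot_distance = float((T * cost).sum())    # semantic distance between prototype and image
```

The transport plan `T` also serves as a soft matching: its entries say how much each prototype element explains each image feature, which is what enables the weighted fusion described above.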
📝 Abstract
With the increasing attention to pre-trained vision-language models (VLMs), e.g., CLIP, substantial efforts have been devoted to many downstream tasks, especially test-time adaptation (TTA). However, previous works learn prototypes only in the textual modality, overlooking the ambiguous semantics in class names. These ambiguities lead to textual prototypes that are insufficient to capture visual concepts, resulting in limited performance. To address this issue, we introduce ProtoMM, a training-free framework that constructs multimodal prototypes to adapt VLMs at test time. By viewing the prototype as a discrete distribution over textual descriptions and visual particles, ProtoMM can combine multimodal features for comprehensive prototype learning. More importantly, the visual particles are dynamically updated as the test stream flows. This allows our multimodal prototypes to continually learn from the data, enhancing their generalizability in unseen scenarios. In addition, we quantify the importance of the prototypes and test images by formulating their semantic distance as an optimal transport problem. Extensive experiments on 15 zero-shot benchmarks demonstrate the effectiveness of our method, achieving a 1.03% average accuracy improvement over state-of-the-art methods on ImageNet and its variant datasets.
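To make the dynamic-prototype idea concrete, the sketch below maintains a class prototype as a discrete distribution over one textual embedding and a small bank of visual particles that is updated as test images stream in. The confidence-ranked particle replacement and uniform support weights are illustrative assumptions for this sketch, not the paper's exact update rule.

```python
import numpy as np

class MultimodalPrototype:
    """A class prototype as a discrete distribution over a textual embedding
    plus a bounded bank of visual particles, updated over the test stream.
    (Illustrative sketch; the replacement policy is an assumption.)"""

    def __init__(self, text_feat, n_particles=4):
        self.text_feat = text_feat / np.linalg.norm(text_feat)
        self.n_particles = n_particles
        self.particles = []  # list of (confidence, unit-norm feature) pairs

    def update(self, img_feat, confidence):
        """Insert a new visual particle, keeping only the most confident ones."""
        img_feat = img_feat / np.linalg.norm(img_feat)
        self.particles.append((confidence, img_feat))
        self.particles.sort(key=lambda p: p[0], reverse=True)
        self.particles = self.particles[: self.n_particles]

    def support_and_weights(self):
        """Return the prototype's discrete support and its weights
        (uniform here; importance weights could come from transport mass)."""
        feats = [self.text_feat] + [f for _, f in self.particles]
        w = np.full(len(feats), 1.0 / len(feats))
        return np.stack(feats), w

# Simulated test stream for one class.
rng = np.random.default_rng(1)
proto = MultimodalPrototype(rng.normal(size=8), n_particles=4)
for t in range(10):
    proto.update(rng.normal(size=8), confidence=rng.uniform())
support, weights = proto.support_and_weights()
```

Because the particle bank is bounded and the update is a sort-and-truncate, the prototype adapts continually at constant memory cost with no parameter updates, matching the training-free setting.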