🤖 AI Summary
This work addresses the challenge faced by current vision-language models in open-set test-time adaptation, where balancing robustness to covariate-shifted in-distribution data and accurate rejection of out-of-distribution samples remains difficult, often leading to performance degradation or misclassification. To tackle this, we propose ProtoDCS, a novel prototype-based double-check separation framework that replaces rigid thresholding with a soft double-check mechanism grounded in Gaussian Mixture Models (GMMs), enabling probabilistic discrimination between in- and out-of-distribution data. Furthermore, we introduce an uncertainty-aware, evidence-driven adaptation strategy that uses prototype-level parameter updates to improve both robustness and efficiency. Extensive experiments demonstrate that our method significantly outperforms existing approaches on the CIFAR-10/100-C and Tiny-ImageNet-C benchmarks, simultaneously improving accuracy on known classes and out-of-distribution detection performance.
📝 Abstract
Large-scale Vision-Language Models (VLMs) exhibit strong zero-shot recognition, yet their real-world deployment is challenged by distribution shifts. While Test-Time Adaptation (TTA) can mitigate this, existing VLM-based TTA methods operate under a closed-set assumption, failing in open-set scenarios where test streams contain both covariate-shifted in-distribution (csID) and out-of-distribution (csOOD) data. This leads to a critical difficulty: the model must discriminate unknown csOOD samples to avoid interference while simultaneously adapting to known csID classes for accuracy. Current open-set TTA (OSTTA) methods rely on hard thresholds for separation and entropy minimization for adaptation. These strategies are brittle, often misclassifying ambiguous csOOD samples and inducing overconfident predictions, and their parameter-update mechanism is computationally prohibitive for VLMs. To address these limitations, we propose Prototype-based Double-Check Separation (ProtoDCS), a robust framework for OSTTA that effectively separates csID and csOOD samples, enabling safe and efficient adaptation of VLMs to csID data. Our main contributions are: (1) a novel double-check separation mechanism employing probabilistic Gaussian Mixture Model (GMM) verification to replace brittle thresholding; and (2) an evidence-driven adaptation strategy utilizing uncertainty-aware loss and efficient prototype-level updates, mitigating overconfidence and reducing computational overhead. Extensive experiments on CIFAR-10/100-C and Tiny-ImageNet-C demonstrate that ProtoDCS achieves state-of-the-art performance, significantly boosting both known-class accuracy and OOD detection metrics. Code will be available at https://github.com/O-YangF/ProtoDCS.
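The abstract does not include code, but the core idea of the double-check separation, replacing a hard confidence threshold with a probabilistic GMM fit over per-sample scores, can be illustrated with a minimal sketch. All names here (`prototypes`, `feats`, the 0.5 cutoff) are hypothetical stand-ins, not the paper's implementation; real prototypes would come from the VLM's class embeddings rather than random vectors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "prototypes" are per-class feature centroids
# (e.g., text/visual class embeddings); "feats" are test-batch features.
num_classes, dim = 10, 64
prototypes = rng.normal(size=(num_classes, dim))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
feats = rng.normal(size=(256, dim))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

# First check: maximum cosine similarity to any prototype as a score.
scores = (feats @ prototypes.T).max(axis=1)

# Second check: instead of a fixed threshold on the score, fit a
# 2-component GMM over the batch scores. The component with the higher
# mean is treated as the csID mode, giving each sample a soft csID
# probability rather than a brittle yes/no decision.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores[:, None])
id_comp = int(np.argmax(gmm.means_.ravel()))
p_id = gmm.predict_proba(scores[:, None])[:, id_comp]

# Only samples judged likely-csID would then drive adaptation updates.
is_csid = p_id > 0.5
```

The advantage of the GMM over a fixed threshold is that the decision boundary adapts to the score distribution of each test stream, and `p_id` can weight the adaptation loss softly instead of discarding ambiguous samples outright.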