Activation Matters: Test-time Activated Negative Labels for OOD Detection with Vision-Language Models

📅 2026-03-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing out-of-distribution (OOD) detection methods often struggle to identify anomalies because their negative labels are insufficiently activated on OOD samples. To address this limitation, this work proposes Test-time Activated Negative Labels (TANL), a mechanism that dynamically selects highly activated negative labels during inference. Specifically, TANL constructs an activation metric from the label assignment probabilities of a vision-language model, adaptively chooses negative labels using both historical and current-batch information, and employs an activation-aware scoring function. The method requires no additional training and reduces FPR95 to 9.8% on the large-scale ImageNet benchmark, outperforming existing approaches while remaining compatible with diverse backbone architectures and task settings.

πŸ“ Abstract
Out-of-distribution (OOD) detection aims to identify samples that deviate from the in-distribution (ID) data. One popular pipeline introduces negative labels distant from the ID classes and detects OOD samples based on their distance to these labels. However, such labels may show poor activation on OOD samples, failing to capture OOD characteristics. To address this, we propose Test-time Activated Negative Labels (TANL), which dynamically evaluates activation levels across the corpus dataset and mines candidate labels with high activation responses during testing. Specifically, TANL identifies high-confidence test images online and accumulates their assignment probabilities over the corpus to construct a label activation metric. This metric leverages historical test samples to adaptively align with the test distribution, enabling the selection of distribution-adaptive activated negative labels. By further exploiting the activation information within the current testing batch, we introduce a more fine-grained, batch-adaptive variant. To fully utilize label activation knowledge, we propose an activation-aware score function that emphasizes negative labels with stronger activations, boosting performance and improving robustness to the number of negative labels. TANL is training-free, test-efficient, and grounded in theoretical justification. Experiments on diverse backbones and a wide range of task settings validate its effectiveness. Notably, on the large-scale ImageNet benchmark, TANL reduces FPR95 from 17.5% to 9.8%. Code is available at https://github.com/YBZh/OpenOOD-VLM.
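The abstract's two core steps (accumulating assignment probabilities from high-confidence test samples into a label activation metric, then scoring with the most-activated negative labels) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the confidence threshold, the top-m selection, and the softmax-style score are all simplifying assumptions for illustration.

```python
import numpy as np

def update_activation(activation, probs_neg, conf, conf_thresh=0.5):
    """Accumulate negative-label assignment probabilities from
    high-confidence test samples (hypothetical simplification of TANL).

    activation: running activation per negative label, shape (K,)
    probs_neg:  per-sample assignment probabilities over the K
                negative labels, shape (B, K)
    conf:       per-sample confidence used to filter samples, shape (B,)
    """
    mask = conf >= conf_thresh          # keep only high-confidence samples
    if mask.any():
        activation += probs_neg[mask].sum(axis=0)
    return activation

def activation_aware_score(sim_id, sim_neg, activation, top_m=10):
    """OOD score that emphasizes the most-activated negative labels.

    sim_id:  image-to-ID-label similarities, shape (B, C)
    sim_neg: image-to-negative-label similarities, shape (B, K)
    Returns a score in (0, 1); higher means more likely ID.
    """
    idx = np.argsort(activation)[::-1][:top_m]   # top-m activated labels
    pos = np.exp(sim_id).sum(axis=1)
    neg = np.exp(sim_neg[:, idx]).sum(axis=1)
    return pos / (pos + neg)
```

In a real pipeline, `sim_id` and `sim_neg` would be cosine similarities between CLIP-style image embeddings and text embeddings of ID and negative labels; here they are treated as precomputed inputs.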
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution detection
negative labels
activation
vision-language models
test-time adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time Activation
Negative Labels
OOD Detection
Vision-Language Models
Distribution Adaptation