🤖 AI Summary
Existing vision-language models rely on overly generalized negative prompts that capture overlapping or misleading semantic cues, degrading out-of-distribution (OOD) detection performance. To address this, we propose a positive–negative prompt supervision framework. Large language models generate category-relevant initial prompts, and a prompt optimization strategy then steers negative prompts toward inter-class boundary features rather than broad non-in-distribution (non-ID) information. In addition, we construct a semantic graph structure that propagates the optimized textual supervision to the visual branch, enhancing the multimodal discriminative capability of energy-based OOD detectors. Extensive experiments on the CIFAR-100 and ImageNet-1K benchmarks, covering eight OOD datasets and five large language models, demonstrate that our method significantly outperforms existing state-of-the-art approaches.
📝 Abstract
Out-of-distribution (OOD) detection aims to delineate the classification boundary between in-distribution (ID) and OOD images. Recent advances in vision-language models (VLMs) have demonstrated remarkable OOD detection performance by integrating the visual and textual modalities. In this context, negative prompts are introduced to emphasize the dissimilarity between image features and prompt content. However, these prompts often cover a broad range of non-ID features and may capture overlapping or misleading information, leading to suboptimal results. To address this issue, we propose Positive and Negative Prompt Supervision, which encourages negative prompts to capture inter-class features and transfers this semantic knowledge to the visual modality to enhance OOD detection performance. Our method begins with class-specific positive and negative prompts initialized by large language models (LLMs). These prompts are then optimized so that positive prompts focus on features within each class while negative prompts highlight features around category boundaries. Additionally, a graph-based architecture aggregates semantic-aware supervision from the optimized prompt representations and propagates it to the visual branch, thereby strengthening the energy-based OOD detector. Extensive experiments on two benchmarks, CIFAR-100 and ImageNet-1K, across eight OOD datasets and five different LLMs, demonstrate that our method outperforms state-of-the-art baselines.
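To make the energy-based scoring concrete, below is a minimal, illustrative sketch of how an OOD score could be computed from an image feature and two prompt sets. It is an assumption-laden toy, not the paper's actual formulation: the function names (`cosine_sim`, `energy_ood_score`), the logsumexp (free-energy) pooling over each prompt set, and the positive-minus-negative combination are all hypothetical choices standing in for the method described above.

```python
import numpy as np

def cosine_sim(a, B):
    # Cosine similarity between one feature vector and each row of a matrix.
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

def energy_ood_score(image_feat, pos_text_feats, neg_text_feats, T=0.1):
    """Toy energy-style score (hypothetical, not the paper's exact detector):
    higher when the image aligns with positive (ID) prompt embeddings and
    lower when it aligns with negative (boundary) prompt embeddings."""
    pos = cosine_sim(image_feat, pos_text_feats) / T
    neg = cosine_sim(image_feat, neg_text_feats) / T
    # Free-energy (logsumexp) pooling over each prompt set.
    e_pos = T * np.logaddexp.reduce(pos)
    e_neg = T * np.logaddexp.reduce(neg)
    return e_pos - e_neg  # larger value suggests in-distribution

# Usage with random stand-in embeddings: a feature near a positive prompt
# should score higher than one near a negative prompt.
rng = np.random.default_rng(0)
pos_prompts = rng.normal(size=(5, 8))
neg_prompts = rng.normal(size=(5, 8))
id_like = pos_prompts[0] + 0.01 * rng.normal(size=8)
ood_like = neg_prompts[0] + 0.01 * rng.normal(size=8)
print(energy_ood_score(id_like, pos_prompts, neg_prompts) >
      energy_ood_score(ood_like, pos_prompts, neg_prompts))
```

In practice the image and prompt features would come from a VLM's frozen encoders (e.g. CLIP-style image and text towers), with the prompt embeddings refined by the optimization and graph-based propagation the abstract describes; the sketch only shows the final scoring step under those assumptions.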