🤖 AI Summary
Addressing the challenge of few-shot out-of-distribution (OOD) detection—where only a limited number of in-distribution (ID) samples are available and both inter-class and intra-class distribution discrepancies must be modeled—this paper proposes the Adaptive Multi-prompt Contrastive Network (AMCN). Built upon the CLIP architecture, AMCN introduces learnable ID and OOD textual prompts to construct a joint vision-language representation space. It further incorporates a prompt-guided ID-OOD separation module and a class-wise thresholding mechanism, enabling fine-grained distribution awareness and adaptive calibration of the decision boundary. Evaluated on multiple standard benchmarks, AMCN outperforms existing state-of-the-art methods, demonstrating strong OOD detection accuracy and generalization under extreme data scarcity.
📝 Abstract
Out-of-distribution (OOD) detection attempts to distinguish outlier samples so that models trained on an in-distribution (ID) dataset do not produce unreliable outputs. Most OOD detection methods require many ID samples for training, which seriously limits their real-world applicability. To this end, we target a challenging setting: few-shot OOD detection, where only a few labeled ID samples are available. Few-shot OOD detection is therefore much more challenging than the traditional OOD detection setting. Previous few-shot OOD detection works ignore the distinct diversity between different classes. In this paper, we propose a novel network, the Adaptive Multi-prompt Contrastive Network (AMCN), which adapts the ID-OOD separation boundary by learning inter- and intra-class distributions. To compensate for the absence of OOD image samples and the scarcity of ID image samples, we leverage CLIP, which connects text with images, and engineer learnable ID and OOD textual prompts. Specifically, we first generate adaptive prompts (learnable ID prompts, label-fixed OOD prompts, and label-adaptive OOD prompts). Then, we generate an adaptive class boundary for each class by introducing a class-wise threshold. Finally, we propose a prompt-guided ID-OOD separation module to control the margin between ID and OOD prompts. Experimental results show that AMCN outperforms other state-of-the-art works.
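The abstract's pipeline — scoring an image against ID and OOD textual prompts in a CLIP-style joint space, then rejecting samples whose best ID probability falls below a per-class threshold — can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: prompt embeddings are stood in by plain vectors, the learnable prompts and the separation module are omitted, and the function names (`id_probability`, `classify`) are invented for illustration.

```python
import numpy as np

def normalize(x):
    """L2-normalize along the last axis (CLIP features are unit-norm)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def id_probability(image_feat, id_prompts, ood_prompts, temp=0.07):
    """Softmax over similarities to ALL prompts (ID and OOD),
    then return only the per-ID-class probabilities.
    OOD prompts soak up probability mass for outlier images."""
    prompts = normalize(np.concatenate([id_prompts, ood_prompts], axis=0))
    sims = normalize(image_feat) @ prompts.T / temp  # cosine / temperature
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    return probs[: len(id_prompts)]

def classify(image_feat, id_prompts, ood_prompts, thresholds, temp=0.07):
    """Predict an ID class index, or -1 (OOD) if the winning class's
    probability is below that class's own threshold (class-wise boundary)."""
    p_id = id_probability(image_feat, id_prompts, ood_prompts, temp)
    k = int(np.argmax(p_id))
    return k if p_id[k] >= thresholds[k] else -1
```

The class-wise threshold list is what gives each class its own boundary: a visually diverse class can use a lower threshold than a compact one, matching the abstract's point that different classes have distinct diversity.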