Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of few-shot out-of-distribution (OOD) detection—where only a limited number of in-distribution (ID) samples are available and both inter-class and intra-class distribution discrepancies must be modeled—this paper proposes the Adaptive Multi-prompt Contrastive Network (AMCN). Built upon the CLIP architecture, AMCN introduces learnable ID/OOD textual prompts to construct a joint vision-language representation space. It further incorporates a prompt-guided ID-OOD separation module and a class-wise adaptive thresholding mechanism, enabling fine-grained distribution awareness and adaptive decision-boundary calibration. Evaluated on multiple standard benchmarks, AMCN significantly outperforms existing state-of-the-art methods, demonstrating superior OOD detection accuracy and strong generalization under extreme data scarcity (e.g., as few as one or two ID samples per class).

📝 Abstract
Out-of-distribution (OOD) detection attempts to distinguish outlier samples so that models trained on an in-distribution (ID) dataset do not produce unreliable outputs. Most OOD detection methods require many labeled ID samples for training, which seriously limits their real-world applications. To this end, we target a challenging setting: few-shot OOD detection, where only a few labeled ID samples are available, making it much more challenging than the traditional OOD detection setting. Previous few-shot OOD detection works ignore the distinct diversity between different classes. In this paper, we propose a novel network, the Adaptive Multi-prompt Contrastive Network (AMCN), which adapts the ID-OOD separation boundary by learning inter- and intra-class distributions. To compensate for the absence of OOD image samples and the scarcity of ID ones, we leverage CLIP, which connects text with images, to engineer learnable ID and OOD textual prompts. Specifically, we first generate adaptive prompts (learnable ID prompts, label-fixed OOD prompts, and label-adaptive OOD prompts). Then, we generate an adaptive class boundary for each class by introducing a class-wise threshold. Finally, we propose a prompt-guided ID-OOD separation module to control the margin between ID and OOD prompts. Experimental results show that AMCN outperforms other state-of-the-art works.
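The class-wise adaptive boundary described in the abstract can be illustrated as a per-class threshold applied to CLIP-style image-prompt similarities. The sketch below is a minimal assumption-laden illustration (function names, the temperature value, and fixed thresholds are hypothetical; the paper learns its thresholds and prompts, which is not reproduced here):

```python
import numpy as np

def ood_decision(image_emb, id_prompt_embs, class_thresholds):
    """Flag an image embedding as ID (returning its class index) or OOD (-1).

    image_emb:        (d,) L2-normalized image feature, e.g. from a CLIP image encoder
    id_prompt_embs:   (C, d) L2-normalized text features of the ID prompts
    class_thresholds: (C,) per-class decision thresholds (stand-ins for the
                      paper's learned class-wise adaptive boundary)
    """
    sims = id_prompt_embs @ image_emb                      # cosine similarities, shape (C,)
    logits = sims * 100.0                                  # CLIP-style temperature scaling
    probs = np.exp(logits) / np.exp(logits).sum()          # softmax over classes
    c = int(np.argmax(probs))
    if probs[c] >= class_thresholds[c]:
        return c       # confident enough for class c: treat as in-distribution
    return -1          # below that class's boundary: treat as out-of-distribution
```

Because each class keeps its own threshold, a diverse class with naturally lower prompt similarity can use a looser boundary than a tight, homogeneous class, which is the intuition behind the class-wise design.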
Problem

Research questions and friction points this paper is trying to address.

Detecting out-of-distribution samples when only a few labeled in-distribution samples are available
Prior few-shot OOD detection methods neglect the distinct diversity between classes
Strengthening ID-OOD separation with adaptive prompts built on CLIP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Multi-prompt Contrastive Network (AMCN)
Leverages CLIP for text-image connection
Generates adaptive prompts and class boundaries
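The "prompt-guided ID-OOD separation module to control the margin between ID and OOD prompts" from the abstract can be sketched as a hinge-style margin objective on prompt similarities. This is a hypothetical simplification (the function name, margin value, and hinge form are assumptions, not the paper's exact loss):

```python
import numpy as np

def id_ood_margin_loss(image_emb, id_prompt_emb, ood_prompt_embs, margin=0.2):
    """Hinge-style penalty pushing an ID image's similarity to its own ID prompt
    to exceed its best similarity to any OOD prompt by at least `margin`.

    image_emb:       (d,) L2-normalized image feature
    id_prompt_emb:   (d,) text feature of the image's ID prompt
    ood_prompt_embs: (K, d) text features of the OOD prompts
    """
    sim_id = float(id_prompt_emb @ image_emb)               # similarity to own class prompt
    sim_ood = float((ood_prompt_embs @ image_emb).max())    # hardest OOD prompt
    return max(0.0, margin - (sim_id - sim_ood))            # zero once the margin is met
```

When the ID similarity already beats every OOD similarity by the margin, the loss is zero; otherwise the gradient widens the gap, which is the contrastive intuition behind separating the two prompt families.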
Xiang Fang
Energy Research Institute @ NTU, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
Arvind Easwaran
Nanyang Technological University (NTU)
Real-Time Systems, Cyber-Physical Systems, Embedded Systems
Blaise Genest
CNRS and CNRS@CREATE, IPAL IRL 2955, France and Singapore