Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

📅 2023-04-10
🏛️ arXiv.org
📈 Citations: 14
Influential: 4
📄 PDF
🤖 AI Summary
This paper introduces the first zero-shot in-distribution (ID) detection task, addressing the challenge of identifying ID objects within multi-object images under zero-shot conditions—even when anomalous out-of-distribution (OOD) objects are present. To overcome the weak discriminative capability of existing methods on rare or atypical ID images, the authors propose Global-Local Maximum Concept Matching (GL-MCM), a novel zero-shot framework built on CLIP. GL-MCM jointly models global and region-level vision–language alignment features, enabling end-to-end ID/OOD discrimination without requiring any ID training samples. Extensive experiments demonstrate that the method significantly outperforms state-of-the-art approaches on both COCO multi-object benchmarks and the ImageNet single-object classification benchmark. The source code is publicly available.
📝 Abstract
Extracting in-distribution (ID) images from noisy images scraped from the Internet is an important preprocessing step for constructing datasets, and it has traditionally been done manually. Automating this preprocessing with deep learning techniques presents two key challenges. First, images should be collected using only the name of the ID class, without training on any ID data. Second, as the motivation behind datasets such as COCO illustrates, it is crucial to identify images containing not only ID objects alone but also both ID and OOD objects as ID images, in order to build robust recognizers. In this paper, we propose a novel problem setting called zero-shot in-distribution (ID) detection, where we identify images containing ID objects as ID images (even if they also contain OOD objects), and images lacking ID objects as OOD images, without any training. To solve this problem, we leverage the powerful zero-shot capability of CLIP and present a simple and effective approach, Global-Local Maximum Concept Matching (GL-MCM), based on both global and local visual-text alignments of CLIP features. Extensive experiments demonstrate that GL-MCM outperforms comparison methods on both multi-object datasets and single-object ImageNet benchmarks. The code will be available via https://github.com/AtsuMiyai/GL-MCM.
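The abstract describes GL-MCM as combining a global maximum concept matching score with a local one computed over region-level CLIP features. A minimal NumPy sketch of that scoring idea is below; it assumes pre-extracted, L2-normalized CLIP embeddings and a simple sum of the two scores, and the exact temperature and normalization may differ from the authors' implementation (`gl_mcm_score` and its arguments are illustrative names, not the paper's API):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gl_mcm_score(global_feat, local_feats, text_feats, temperature=0.01):
    """Sketch of a GL-MCM-style ID score (higher = more likely ID).

    global_feat: (D,)    L2-normalized CLIP global image embedding
    local_feats: (R, D)  L2-normalized CLIP region/patch embeddings
    text_feats:  (K, D)  L2-normalized text embeddings for K ID class prompts
    """
    # Global MCM: max softmax-scaled similarity between the whole image
    # and the ID class prompts.
    global_sims = global_feat @ text_feats.T          # (K,)
    mcm_global = softmax(global_sims / temperature).max()

    # Local MCM: for each ID class, take its best-matching region, then
    # softmax over classes and take the max. An ID object anywhere in a
    # multi-object image can raise this score even when OOD objects
    # dominate the global embedding.
    local_sims = local_feats @ text_feats.T           # (R, K)
    per_class_best = local_sims.max(axis=0)           # (K,)
    mcm_local = softmax(per_class_best / temperature).max()

    # GL-MCM: combine the global and local scores.
    return mcm_global + mcm_local
```

Thresholding this score then separates ID images (containing at least one ID object) from OOD images, with no ID training samples required.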
Problem

Research questions and friction points this paper is trying to address.

Zero-shot Anomaly Detection
Multi-Object Recognition
Uncommon Image Identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot Anomaly Detection
Global-Local Maximum Concept Matching (GL-MCM)
Multi-object Handling