Sampling Bag of Views for Open-Vocabulary Object Detection

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing open-vocabulary object detection methods suffer from limited performance on unseen categories due to coarse-grained region embeddings with insufficient semantic alignment to vision-language models (VLMs), high contextual noise, and excessive computational overhead. To address these issues, we propose a concept-based sampling and alignment framework: it clusters context-aware semantics into "view bags" and introduces a scale-adaptive mechanism to dynamically refine intra-bag concept representations, enabling robust and efficient modeling of compositional structure. Our method integrates Faster R-CNN with CLIP, achieving gains of +2.6 box AP50 and +0.5 mask AP on novel classes on the COCO and LVIS benchmarks, respectively. Moreover, it reduces CLIP's forward FLOPs by 80.3%, significantly outperforming state-of-the-art approaches. The core innovations lie in view-bag modeling and scale-adaptive alignment, which jointly enhance semantic fidelity and inference efficiency.

📝 Abstract
Existing open-vocabulary object detection (OVD) methods detect unseen categories at test time by aligning object region embeddings with the corresponding features of vision-language models (VLMs). A recent study leverages the idea that VLMs implicitly learn compositional structures of semantic concepts within an image: instead of using an individual region embedding, it uses a bag of region embeddings as a new representation to incorporate compositional structures into the OVD task. However, this approach often fails to capture the contextual concepts of each region, leading to noisy compositional structures, only marginal performance improvements, and reduced efficiency. To address this, we propose a novel concept-based alignment method that samples a more powerful and efficient compositional structure. Our approach groups contextually related "concepts" into a bag and adjusts the scale of concepts within the bag for more effective embedding alignment. Combined with Faster R-CNN, our method improves on prior work by 2.6 box AP50 and 0.5 mask AP on novel categories in the open-vocabulary COCO and LVIS benchmarks, respectively. Furthermore, our method reduces CLIP computation by 80.3% in FLOPs compared to previous research, significantly enhancing efficiency. Experimental results demonstrate that the proposed method outperforms previous state-of-the-art models on the OVD datasets.
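The bag-and-align idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `bag_embedding` function, its softmax-style scale weights, and the agreement-with-center heuristic are all assumptions standing in for the paper's unspecified scale-adaptive mechanism; the region and text vectors are random placeholders for Faster R-CNN region embeddings and CLIP text embeddings.

```python
# Hypothetical sketch of aligning a "bag" of region embeddings with a
# CLIP-style text embedding; the actual grouping and scale weights used
# in the paper are not specified here.
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def bag_embedding(region_embs, tau=0.1):
    """Aggregate a bag of region embeddings into a single vector.

    Each region receives a scale weight based on its agreement with the
    bag's mean embedding (an assumed stand-in for scale adaptation), so
    contextually consistent regions dominate and noisy ones are damped.
    """
    regs = l2_normalize(region_embs)           # (n, d) unit region vectors
    center = l2_normalize(regs.mean(axis=0))   # (d,) bag center
    scores = regs @ center                     # cosine agreement with center
    weights = np.exp(scores / tau)             # temperature-scaled weights
    weights /= weights.sum()
    return l2_normalize((weights[:, None] * regs).sum(axis=0))

# Alignment score of one bag against a CLIP-style text embedding.
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 512))            # 5 region embeddings in a bag
text = l2_normalize(rng.normal(size=512))      # placeholder text embedding
score = bag_embedding(regions) @ text          # cosine similarity in [-1, 1]
```

In an actual OVD pipeline, the bag embedding would replace the individual region embedding in the contrastive alignment loss, so CLIP is queried once per bag rather than once per region, which is consistent with the FLOPs reduction the abstract reports.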
Problem

Research questions and friction points this paper is trying to address.

Object Detection
Unseen Objects
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept Clustering
Visual Language Model Alignment
Efficient Open-Vocabulary Object Detection