OW-CLIP: Data-Efficient Visual Supervision for Open-World Object Detection via Human-AI Collaboration

📅 2025-07-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-world object detection (OWOD) faces three key challenges: high annotation cost, feature overfitting to known classes, and inflexible model architectures. To address these, we propose a plug-and-play collaborative learning framework that requires no backbone modification. Our approach integrates multimodal prompt tuning with Crop-Smoothing—a novel feature smoothing technique—and leverages CLIP and large language models for dual-modal data refinement, including cross-modal similarity filtering and visualization-guided interactive annotation. This enables low-cost, high-quality unknown-class recognition and continual learning. Empirically, using only 3.8% self-generated annotations, our method achieves 89% of the performance of state-of-the-art (SOTA) approaches; under equal annotation budgets, it significantly outperforms existing methods. The framework substantially reduces annotation overhead while enhancing generalization across known and unknown categories.
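The cross-modal similarity filtering step can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's implementation: it assumes CLIP-style image-crop and text-phrase embeddings compared by cosine similarity, with random vectors standing in for real CLIP outputs, and `cosine_filter` and the threshold value are hypothetical names chosen here.

```python
import numpy as np

def cosine_filter(image_embs, text_emb, threshold=0.5):
    """Keep image crops whose embedding is similar enough to the
    embedding of a candidate class phrase (CLIP-style filtering)."""
    # L2-normalize so the dot product equals cosine similarity
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb)
    sims = img @ txt
    keep = sims >= threshold
    return keep, sims

# Toy example: 4 "crops" in an 8-d embedding space (stand-in dimensions)
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(4, 8))
# Make crop 0 nearly aligned with the text phrase embedding
text_emb = image_embs[0] + 0.1 * rng.normal(size=8)
keep, sims = cosine_filter(image_embs, text_emb)
print(keep)
```

In practice the threshold would be tuned per class, and borderline crops are exactly the cases the paper routes to the visualization-guided interactive annotation step.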

📝 Abstract
Open-world object detection (OWOD) extends traditional object detection to identifying both known and unknown objects, necessitating continuous model adaptation as new annotations emerge. Current approaches face significant limitations: 1) data-hungry training due to reliance on large numbers of crowdsourced annotations, 2) susceptibility to "partial feature overfitting," and 3) limited flexibility due to required model architecture modifications. To tackle these issues, we present OW-CLIP, a visual analytics system that provides curated data and enables data-efficient incremental training of OWOD models. OW-CLIP implements plug-and-play multimodal prompt tuning tailored for OWOD settings and introduces a novel "Crop-Smoothing" technique to mitigate partial feature overfitting. To meet the data requirements of this training methodology, we propose dual-modal data refinement methods that leverage large language models and cross-modal similarity for data generation and filtering. We also develop a visualization interface that enables users to explore and deliver high-quality annotations, including class-specific visual feature phrases and fine-grained differentiated images. Quantitative evaluation demonstrates that OW-CLIP reaches 89% of state-of-the-art performance while requiring only 3.8% self-generated data, and outperforms SOTA approaches when trained with equivalent data volumes. A case study shows the effectiveness of the developed method and the improved annotation quality enabled by our visualization system.
Problem

Research questions and friction points this paper is trying to address.

High annotation cost: OWOD training depends on large volumes of crowdsourced labels
Partial feature overfitting to known classes degrades unknown-class recognition
Required architecture modifications make existing OWOD methods inflexible
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play multimodal prompt tuning
Crop-Smoothing technique for overfitting
Dual-modal data refinement methods
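The plug-and-play prompt tuning idea above can be sketched in the style of learnable context vectors (as in CoOp-like prompt tuning): the backbone and class-name token embeddings stay frozen, and only a few prepended context vectors are trained. This is an assumption-laden illustration, not the paper's code; the array sizes and the `build_prompt` helper are hypothetical, and random vectors stand in for real CLIP token embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim, n_ctx, n_tokens = 16, 4, 7  # hypothetical sizes

# Frozen token embeddings for a class-name phrase (stand-ins for CLIP's)
class_tokens = rng.normal(size=(n_tokens, embed_dim))
class_tokens.flags.writeable = False  # "frozen": never updated in training

# The only trainable parameters: a few learnable context vectors
ctx = rng.normal(scale=0.02, size=(n_ctx, embed_dim))

def build_prompt(ctx, class_tokens):
    """Prepend learnable context vectors to frozen class-name tokens,
    leaving the backbone untouched (the plug-and-play property)."""
    return np.concatenate([ctx, class_tokens], axis=0)

prompt = build_prompt(ctx, class_tokens)
print(prompt.shape)  # → (11, 16), i.e. (n_ctx + n_tokens, embed_dim)
```

Because only `ctx` receives gradients, this kind of tuning requires no modification to the detector or CLIP backbone, which is the flexibility argument the paper makes.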