🤖 AI Summary
To address three key challenges in open-world segmentation—insufficient late-stage feature fusion, suboptimal prompt query selection, and limited semantic coverage—this paper proposes Prompt-DINO, a text-guided universal image segmentation framework. Methodologically, it introduces (i) an early cross-modal feature fusion mechanism for deep visual–textual alignment; (ii) an order-aligned prompt query selection strategy that improves object localization accuracy; and (iii) a generative data engine built upon the RAP model, which produces large-scale, high-fidelity training data and reduces label noise by 80.5%. Built on the DETR architecture, Prompt-DINO integrates multimodal prompt encoding with dual-path cross-verified data synthesis and is trained end-to-end. On open-world detection benchmarks, it achieves state-of-the-art performance, significantly broadening semantic coverage while maintaining high accuracy and strong scalability.
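The "early cross-modal feature fusion" idea can be illustrated with a minimal sketch: instead of encoding the two modalities separately and merging them at the end, prompt embeddings are concatenated with the backbone's visual tokens before the first encoder layer, so self-attention mixes text and vision from the start. All shapes and names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy token embeddings (illustrative sizes, not from the paper):
# 196 visual tokens and 8 text-prompt tokens, shared embedding dim 256.
visual_tokens = rng.standard_normal((196, 256))
text_tokens = rng.standard_normal((8, 256))

def early_fuse(visual: np.ndarray, text: np.ndarray) -> np.ndarray:
    """Concatenate prompt tokens with backbone features along the sequence
    axis, so a subsequent transformer encoder attends across both modalities
    jointly (as opposed to late fusion, which merges after separate encoders)."""
    return np.concatenate([visual, text], axis=0)

fused = early_fuse(visual_tokens, text_tokens)
print(fused.shape)  # (204, 256): one joint sequence fed to the encoder
```

The design point is simply where the merge happens: fusing before encoding lets every encoder layer resolve ambiguities between a text phrase and the image regions it refers to, rather than reconciling two already-finished representations.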
📝 Abstract
Recent advancements in multimodal vision models have highlighted limitations in late-stage feature fusion and suboptimal query selection for hybrid-prompt open-world segmentation, alongside constraints from caption-derived vocabularies. To address these challenges, we propose Prompt-DINO, a text-guided visual prompting DINO framework featuring three key innovations. First, we introduce an early fusion mechanism that unifies text/visual prompts and backbone features at the initial encoding stage, enabling deeper cross-modal interactions to resolve semantic ambiguities. Second, we design order-aligned query selection for DETR-based architectures, explicitly optimizing the structural alignment between text and visual queries during decoding to enhance semantic-spatial consistency. Third, we develop a generative data engine powered by the Recognize Anything via Prompting (RAP) model, which synthesizes 0.5B diverse training instances through a dual-path cross-verification pipeline, reducing label noise by 80.5% compared to conventional approaches. Extensive experiments demonstrate that Prompt-DINO achieves state-of-the-art performance on open-world detection benchmarks while significantly expanding semantic coverage beyond fixed-vocabulary constraints. Our work establishes a new paradigm for scalable multimodal detection and data generation in open-world scenarios. Data and code are available at https://github.com/WeChatCV/WeVisionOne.
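The order-aligned query selection described above can be sketched in toy form: for each text token, taken in its original prompt order, select the candidate decoder query that matches it best, so the selected queries inherit the text's sequence structure. The similarity criterion and shapes here are assumptions for illustration; the paper's actual scoring and alignment objective are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 100 candidate decoder queries, 5 text tokens, dim 256.
queries = rng.standard_normal((100, 256))
text_tokens = rng.standard_normal((5, 256))

def order_aligned_select(queries: np.ndarray, text: np.ndarray):
    """For each text token, in prompt order, pick the highest dot-product
    query -- a toy stand-in for order-aligned selection. The output row i
    corresponds to text token i, preserving semantic-spatial correspondence."""
    sim = text @ queries.T          # (num_text, num_queries) similarity
    idx = sim.argmax(axis=1)        # best query per text token, text order kept
    return queries[idx], idx

selected, idx = order_aligned_select(queries, text_tokens)
print(selected.shape)  # (5, 256): one query per text token, in text order
```

The contrast is with unordered top-k selection, where the k highest-scoring queries are kept regardless of which prompt token they answer to; keeping the text's ordering gives the decoder a consistent query-to-token correspondence.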