🤖 AI Summary
Existing visual grounding methods suffer either from the inefficiency of two-stage proposal-based paradigms or from the weak supervision and coarse-grained discrimination of end-to-end approaches. To address these limitations, this paper proposes PropVG, the first end-to-end proposal-based visual grounding framework. Its core innovation is the deep integration of foreground proposal generation with referring expression comprehension, enabling their joint optimization within a unified architecture without an additional detector. A Contrastive-based Refer Scoring (CRS) module performs sentence-level and word-level contrastive learning to strengthen vision-language alignment, and a Multi-granularity Target Discrimination (MTD) module fuses object-level and semantic-level cues to improve the recognition of absent targets. Extensive experiments demonstrate that PropVG achieves significant improvements over state-of-the-art methods on the RefCOCO, gRefCOCO, R-RefCOCO, and Ref-ZOM benchmarks, particularly excelling in complex scenes with markedly higher localization accuracy.
📝 Abstract
Recent advances in visual grounding have largely shifted away from traditional proposal-based two-stage frameworks due to their inefficiency and high computational complexity, favoring end-to-end direct reference paradigms. However, these methods rely exclusively on the referred target for supervision, overlooking the potential benefits of prominent prospective targets. Moreover, existing approaches often fail to incorporate multi-granularity discrimination, which is crucial for robust object identification in complex scenarios. To address these limitations, we propose PropVG, an end-to-end proposal-based framework that, to the best of our knowledge, is the first to seamlessly integrate foreground object proposal generation with referential object comprehension without requiring additional detectors. Furthermore, we introduce a Contrastive-based Refer Scoring (CRS) module, which employs contrastive learning at both the sentence and word levels to enhance the model's capability to understand and distinguish referred objects. Additionally, we design a Multi-granularity Target Discrimination (MTD) module that fuses object- and semantic-level information to improve the recognition of absent targets. Extensive experiments on the gRefCOCO (GREC/GRES), Ref-ZOM, R-RefCOCO, and RefCOCO (REC/RES) benchmarks demonstrate the effectiveness of PropVG. Code and models are available at https://github.com/Dmmm1997/PropVG.
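To make the CRS idea concrete, the sketch below scores each foreground proposal against the expression at two granularities: cosine similarity to a pooled sentence embedding, and max similarity over per-token embeddings. This is a generic illustration under assumed shapes and pooling choices (the function name, the max-over-tokens aggregation, and the softmax normalization are assumptions, not the paper's actual implementation):

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length for cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_refer_scores(proposal_feats, sent_emb, word_embs, tau=0.07):
    """Score proposals against a referring expression at two granularities.

    proposal_feats: (N, D) visual features of N foreground proposals
    sent_emb:       (D,)   pooled sentence-level text embedding
    word_embs:      (T, D) per-token (word-level) text embeddings
    Returns a softmax distribution over the N proposals.
    """
    p = l2norm(proposal_feats)
    s = l2norm(sent_emb)
    w = l2norm(word_embs)
    # Sentence level: cosine similarity of each proposal to the whole phrase.
    sent_sim = p @ s                      # (N,)
    # Word level: best-matching token per proposal (max over tokens).
    word_sim = (p @ w.T).max(axis=1)      # (N,)
    # Fuse the two levels and sharpen with a temperature, as in InfoNCE.
    logits = (sent_sim + word_sim) / (2.0 * tau)
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()
```

At training time, such scores would feed a contrastive loss that pulls the referred proposal toward the text embeddings and pushes the others away; here the function only illustrates the inference-side scoring.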