🤖 AI Summary
Existing referring expression segmentation and grounding methods rely on single-sentence descriptions, which fail to capture visually rich object details and therefore often misidentify similar objects. To address this, we propose a vision-enhanced latent expression generation framework. First, subject distributor and visual concept injector modules synthesize multiple diverse, attribute-rich latent textual expressions from a single input sentence. Second, a positive-margin contrastive learning strategy aligns all latent expressions with the original text while preserving their fine-grained distinctions. Finally, leveraging a latent space in which the expressions share a common subject but carry distinct attributes, we jointly optimize cross-modal alignment and the co-refinement of text and latent expressions. Our method achieves state-of-the-art performance on multiple referring expression segmentation and comprehension benchmarks, and significantly outperforms prior work on the generalized referring expression segmentation (GRES) task.
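A minimal sketch of how such latent expressions might be formed is shown below: a shared subject projection derived from the sentence embedding is paired with K attribute queries that attend to image features, so every latent expression keeps the same subject while picking up a distinct visual attribute. The module names, shapes, and the use of cross-attention here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LatentExpressionGenerator(nn.Module):
    """Illustrative sketch: builds K latent expressions from one sentence
    embedding by pairing a shared subject representation with K distinct,
    visually derived attribute representations. All names and shapes are
    assumptions for exposition, not the paper's actual modules."""

    def __init__(self, dim: int, num_latents: int = 4):
        super().__init__()
        self.num_latents = num_latents
        # "Subject distributor": projects the sentence embedding into a
        # subject representation shared by all latent expressions.
        self.subject_proj = nn.Linear(dim, dim)
        # "Visual concept injector": K learnable attribute queries pull
        # distinct cues from the image features via cross-attention.
        self.attr_queries = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, text_emb: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # text_emb: (B, dim) sentence embedding; vis_feats: (B, N, dim) image tokens
        B = text_emb.size(0)
        subject = self.subject_proj(text_emb)                        # (B, dim)
        queries = self.attr_queries.unsqueeze(0).expand(B, -1, -1)   # (B, K, dim)
        # Each query attends to the visual features to collect one distinct attribute.
        attrs, _ = self.cross_attn(queries, vis_feats, vis_feats)    # (B, K, dim)
        # Every latent expression shares the subject but carries its own attribute.
        shared = subject.unsqueeze(1).expand(-1, self.num_latents, -1)
        latents = self.fuse(torch.cat([shared, attrs], dim=-1))      # (B, K, dim)
        return latents
```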
📝 Abstract
Visual grounding tasks, such as referring image segmentation (RIS) and referring expression comprehension (REC), aim to localize a target object based on a given textual description. The target object in an image can be described in multiple ways, reflecting diverse attributes such as color and position. However, most existing methods rely on a single textual input, which captures only a fraction of the rich information available in the visual domain. This mismatch between rich visual details and sparse textual cues can lead to the misidentification of similar objects. To address this, we propose a novel visual grounding framework that leverages multiple latent expressions generated from a single textual input by incorporating complementary visual details absent from the original description. Specifically, we introduce subject distributor and visual concept injector modules to embed both shared-subject and distinct-attribute concepts into the latent representations, thereby capturing unique, target-specific visual cues. We also propose a positive-margin contrastive learning strategy to align all latent expressions with the original text while preserving subtle variations. Experimental results show that our method not only outperforms state-of-the-art RIS and REC approaches on multiple benchmarks but also achieves outstanding performance on the generalized referring expression segmentation (GRES) benchmark.
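The positive-margin contrastive strategy can be read as a hinge on the positive pair: each latent expression is pulled toward its sentence embedding only until the similarity reaches a margin below 1, so the latents stay aligned with the text without collapsing onto it, while other sentences in the batch act as negatives. The PyTorch sketch below illustrates one plausible form under that reading; the exact loss, temperature, and margin values are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def positive_margin_contrastive_loss(latents: torch.Tensor,
                                     text_emb: torch.Tensor,
                                     margin: float = 0.2,
                                     temperature: float = 0.07) -> torch.Tensor:
    """Illustrative sketch of a positive-margin contrastive objective.

    latents: (B, K, D) latent expressions for each sentence in the batch.
    text_emb: (B, D) sentence embeddings of the original expressions.
    The positive term stops pulling once cosine similarity reaches
    (1 - margin), preserving attribute-specific variation; the negative
    term is a standard InfoNCE-style push away from other sentences.
    """
    B, K, D = latents.shape
    latents = F.normalize(latents, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Similarity of every latent to every sentence in the batch: (B, K, B)
    sim = torch.einsum("bkd,cd->bkc", latents, text_emb) / temperature

    # Positive term: hinge that saturates at similarity (1 - margin).
    pos = torch.einsum("bkd,bd->bk", latents, text_emb)           # (B, K)
    pos_loss = F.relu((1.0 - margin) - pos).mean()

    # Negative term: each latent should match its own sentence, not others.
    labels = torch.arange(B, device=latents.device)               # (B,)
    labels = labels.unsqueeze(1).expand(-1, K).reshape(-1)        # (B*K,)
    neg_loss = F.cross_entropy(sim.reshape(B * K, B), labels)

    return pos_loss + neg_loss
```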