Towards Motion-aware Referring Image Segmentation

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing referring image segmentation (RIS) methods when handling motion-related textual descriptions. To tackle this issue, the authors propose a motion-aware semantic data augmentation strategy that requires no additional annotations and introduce a Multimodal Radial Contrastive Learning (MRaCL) framework, which performs contrastive learning on fused vision-language embeddings rather than unimodal representations. Additionally, they construct M-Bench—the first benchmark specifically designed for action-discriminative RIS—along with a dedicated motion-centric test split. Experimental results demonstrate that the proposed approach substantially improves segmentation performance on motion-based queries across multiple state-of-the-art RIS models while maintaining competitive accuracy on appearance-based referring expressions.
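The summary says the augmentation derives motion expressions from existing captions without new annotations. The paper's actual extraction procedure is not given here; as a toy illustration only, a heuristic might keep the sub-phrase starting at a motion verb and pair it with the caption's existing mask (`MOTION_VERBS`, `extract_motion_phrase`, and `augment` are hypothetical names, not the authors' code):

```python
# Hypothetical sketch: mine motion-centric sub-phrases from captions and
# reuse the original segmentation mask, so no new annotations are needed.
MOTION_VERBS = {"running", "jumping", "holding", "throwing", "walking", "riding"}

def extract_motion_phrase(caption):
    """Return the motion-centric tail of a caption, or None.

    e.g. "A man in a red shirt running on the beach" -> "running on the beach"
    """
    words = caption.lower().split()
    for i, w in enumerate(words):
        if w in MOTION_VERBS:
            return " ".join(words[i:])
    return None

def augment(samples):
    """samples: list of (caption, mask_id). Appends motion-only queries."""
    out = list(samples)
    for caption, mask_id in samples:
        phrase = extract_motion_phrase(caption)
        if phrase and phrase != caption.lower():
            out.append((phrase, mask_id))  # same mask, new motion query
    return out
```

A real system would use a parser or an LLM to find the verb phrase; the fixed verb set here is only to keep the sketch self-contained.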

📝 Abstract
Referring Image Segmentation (RIS) requires identifying objects in images based on textual descriptions. We observe that existing methods significantly underperform on motion-related queries compared to appearance-based ones. To address this, we first introduce an efficient data augmentation scheme that extracts motion-centric phrases from original captions, exposing models to more motion expressions without additional annotations. Second, since the same object can be described differently depending on the context, we propose Multimodal Radial Contrastive Learning (MRaCL), performed on fused image-text embeddings rather than unimodal representations. For comprehensive evaluation, we introduce a new test split focusing on motion-centric queries, as well as a new benchmark called M-Bench, where objects are distinguished primarily by actions. Extensive experiments show our method substantially improves performance on motion-centric queries across multiple RIS models while maintaining competitive results on appearance-based descriptions. Code is available at https://github.com/snuviplab/MRaCL
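The abstract states only that the contrastive objective operates on fused image-text embeddings, grouping different expressions for the same object. A minimal sketch of such an objective, assuming a standard supervised-contrastive formulation (the paper's exact "radial" loss may differ, and `radial_contrastive_loss` is a name chosen here for illustration):

```python
import numpy as np

def radial_contrastive_loss(fused, object_ids, temperature=0.1):
    """Toy contrastive loss over fused vision-language embeddings.

    fused:      (N, D) array, one fused embedding per referring expression
    object_ids: (N,)   id of the object each expression refers to

    Embeddings of expressions referring to the same object are pulled
    together; embeddings for different objects are pushed apart.
    """
    z = fused / np.linalg.norm(fused, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # (N, N) scaled cosine sims
    n = len(z)
    self_mask = np.eye(n, dtype=bool)
    pos = (object_ids[:, None] == object_ids[None, :]) & ~self_mask
    # Row-wise log-softmax, excluding each sample's self-similarity.
    sim_no_self = np.where(self_mask, -np.inf, sim)
    m = sim_no_self.max(axis=1, keepdims=True)
    log_denom = m + np.log(np.exp(sim_no_self - m).sum(axis=1, keepdims=True))
    log_prob = sim - log_denom
    n_pos = np.maximum(pos.sum(axis=1), 1)           # avoid divide-by-zero
    per_sample = -(pos * log_prob).sum(axis=1) / n_pos
    return float(per_sample.mean())
```

In a real RIS model the fused embeddings would come from the cross-modal encoder, and the loss would be computed per batch alongside the segmentation loss; this sketch only shows the grouping behavior the abstract describes.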
Problem

Research questions and friction points this paper is trying to address.

Referring Image Segmentation
motion-aware
motion-centric queries
visual grounding
multimodal understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-aware Referring Image Segmentation
Data Augmentation
Multimodal Radial Contrastive Learning
M-Bench
Motion-centric Queries