🤖 AI Summary
To address the insufficient multimodal discriminability caused by coarse-grained vision-language alignment in referring segmentation of remote sensing images, this paper proposes a fine-grained vision-language alignment framework. It decomposes referring expressions into object descriptions and spatial descriptions, enabling collaborative modeling of fine-grained semantic correspondences across the vision and language modalities. Key contributions include: (i) a Fine-grained Image-Text Alignment Module (FIAM) that achieves hierarchical semantic alignment; (ii) a Text-aware Multi-scale Enhancement Module (TMEM) that enables text-guided, adaptive cross-scale feature fusion; and (iii) a hybrid CNN-Transformer architecture that jointly enhances local detail capture and global contextual reasoning. Extensive experiments on two benchmark datasets, RefSegRS and RRSIS-D, demonstrate that the method outperforms existing state-of-the-art approaches by significant margins. The source code is publicly available.
📝 Abstract
Given a language expression, referring remote sensing image segmentation (RRSIS) aims to identify ground objects and assign pixel-wise labels within the imagery. One of the key challenges for this task is capturing discriminative multi-modal features via text-image alignment. However, existing RRSIS methods adopt a vanilla, coarse alignment, in which features extracted from the language expression are directly fused with the visual features. In this paper, we argue that a "fine-grained image-text alignment" can improve the extraction of multi-modal information. To this end, we propose a new referring remote sensing image segmentation method that fully exploits the visual and linguistic representations. Specifically, the original referring expression is regarded as context text, which is further decoupled into ground object and spatial position texts. The proposed Fine-grained Image-Text Alignment Module (FIAM) simultaneously leverages the features of the input image and the corresponding texts, obtaining more discriminative multi-modal representations. Meanwhile, to handle the varied scales of ground objects in remote sensing imagery, we introduce a Text-aware Multi-scale Enhancement Module (TMEM) that adaptively performs cross-scale fusion and interaction. We evaluate the effectiveness of the proposed method on two public referring remote sensing datasets, RefSegRS and RRSIS-D, where it obtains superior performance over several state-of-the-art methods. The code will be publicly available at https://github.com/Shaosifan/FIANet.
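To make the decoupled-alignment idea concrete, here is a minimal NumPy sketch: visual tokens attend separately over the context, object, and spatial text embeddings, and the three aligned maps are fused back into the visual features. The function names, the single-head attention, and the simple summed residual fusion are illustrative assumptions, not the paper's exact FIAM design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(visual, text):
    """Single-head cross-attention: visual tokens are queries,
    text tokens are keys and values."""
    d = visual.shape[-1]
    scores = visual @ text.T / np.sqrt(d)        # (N_vis, N_txt)
    return softmax(scores, axis=-1) @ text       # (N_vis, d)

def fine_grained_alignment(visual, context_txt, object_txt, spatial_txt):
    """Align visual features with each text component separately,
    then fuse the aligned maps with a residual sum (a placeholder
    for a learned fusion)."""
    aligned = sum(cross_attention(visual, t)
                  for t in (context_txt, object_txt, spatial_txt))
    return visual + aligned

rng = np.random.default_rng(0)
vis = rng.normal(size=(16, 64))   # 16 visual tokens, dim 64
ctx = rng.normal(size=(8, 64))    # full-expression (context) tokens
obj = rng.normal(size=(3, 64))    # ground-object description tokens
spa = rng.normal(size=(4, 64))    # spatial-position description tokens

out = fine_grained_alignment(vis, ctx, obj, spa)
print(out.shape)  # (16, 64)
```

The output keeps the visual token shape, so the aligned features can feed directly into a segmentation decoder; in the actual method the fusion and the text decoupling are learned rather than fixed as here.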