Referring Remote Sensing Image Segmentation with Cross-view Semantics Interaction Network

📅 2025-08-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Referring Remote Sensing Image Segmentation (RRSIS) methods rely on single-view, full-image inputs and saliency-biased mechanisms, limiting their ability to accurately segment small or ambiguous objects in remote sensing imagery. To address this, we propose CSINet, a Cross-View Semantic Interaction Network. CSINet features a dual-branch encoder that separately models distant- and close-view visual cues; introduces a Cross-View Window Attention (CVWin) module to jointly enhance local details and global semantics during encoding; and employs a Collaborative Dilated Attention Decoder (CDAD) that explicitly captures target orientation while fusing multi-scale features. Evaluated on multiple remote sensing benchmarks, CSINet achieves significant improvements in segmentation accuracy for small and ambiguous objects, while maintaining efficient inference. Our approach establishes a new paradigm for referring-expression-driven remote sensing image segmentation.
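For readers who want a concrete picture of the cross-view exchange, below is a minimal sketch of how a window-based cross-attention between the remote-view and close-view branch features could look. The module internals, the names (`CrossViewWindowAttention`, `window_partition`), the window size, and the residual wiring are assumptions for illustration; the summary only states that CVWin injects global and local semantics into both branches during encoding.

```python
# Hedged sketch of a cross-view window-attention exchange between a
# remote-view (full-image) branch and a close-view (zoomed) branch.
# Structure and naming are assumptions, not the authors' released code.
import torch
import torch.nn as nn


def window_partition(x, win):
    """Split a (B, C, H, W) map into (B*nW, win*win, C) token windows."""
    B, C, H, W = x.shape
    x = x.view(B, C, H // win, win, W // win, win)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, win * win, C)


def window_reverse(tokens, win, B, C, H, W):
    """Inverse of window_partition."""
    x = tokens.view(B, H // win, W // win, win, win, C)
    return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


class CrossViewWindowAttention(nn.Module):
    """Windows of one view attend to the spatially aligned windows of the other view."""

    def __init__(self, dim, num_heads=4, win=8):
        super().__init__()
        self.win = win
        self.remote_to_close = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.close_to_remote = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_r = nn.LayerNorm(dim)
        self.norm_c = nn.LayerNorm(dim)

    def forward(self, remote_feat, close_feat):
        # Assumes both branches produce maps of the same size, divisible by `win`.
        B, C, H, W = remote_feat.shape
        r = self.norm_r(window_partition(remote_feat, self.win))
        c = self.norm_c(window_partition(close_feat, self.win))
        # Each branch queries the other view, so global context flows into the
        # close view and local detail flows back into the remote view.
        r_out, _ = self.close_to_remote(query=r, key=c, value=c)
        c_out, _ = self.remote_to_close(query=c, key=r, value=r)
        remote_out = remote_feat + window_reverse(r_out, self.win, B, C, H, W)
        close_out = close_feat + window_reverse(c_out, self.win, B, C, H, W)
        return remote_out, close_out
```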

📝 Abstract
Recently, Referring Remote Sensing Image Segmentation (RRSIS) has attracted wide attention. To handle the drastic scale variation of remote sensing targets, existing methods use only the full image as input and nest saliency-preferring cross-scale information interaction into a traditional single-view structure. Although effective for visually salient targets, they still struggle with tiny, ambiguous ones in many real scenarios. In this work, we instead propose a parallel yet unified segmentation framework, the Cross-view Semantics Interaction Network (CSINet), to address these limitations. Motivated by how humans observe targets of interest, the network orchestrates visual cues from remote and close distances to make synergistic predictions. At every encoding stage, a Cross-View Window-attention module (CVWin) supplements global and local semantics into the close-view and remote-view branch features, promoting a unified feature representation. In addition, we develop a Collaboratively Dilated Attention enhanced Decoder (CDAD) that mines the orientation property of the target while integrating cross-view multiscale features. The proposed network seamlessly enhances the exploitation of global and local semantics, achieving significant improvements over other methods while maintaining satisfactory speed.
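Below is a rough sketch of what a collaboratively dilated attention decoder block might look like. The dilation rates, the spatial-gate formulation, and the top-down fusion order are assumptions made for illustration; the abstract only says that CDAD mines the orientation property of the target while integrating cross-view multiscale features.

```python
# Hedged sketch of a dilated-attention decoder in the spirit of CDAD.
# Parallel dilated branches build a wide-context spatial gate over the
# upsampled, fused cross-view features. Details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedAttentionBlock(nn.Module):
    def __init__(self, dim, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; larger rates see a wider context,
        # which helps the block respond to target orientation and extent.
        self.branches = nn.ModuleList(
            nn.Conv2d(dim, dim, 3, padding=d, dilation=d) for d in dilations
        )
        self.to_attn = nn.Conv2d(dim * len(dilations), 1, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.to_attn(ctx))  # (B, 1, H, W) spatial gate
        return self.proj(x * attn) + x


class CDADecoder(nn.Module):
    """Top-down decoder: upsample, add the skip feature, refine with dilated attention."""

    def __init__(self, dims=(512, 256, 128, 64)):
        super().__init__()
        self.laterals = nn.ModuleList(nn.Conv2d(d, dims[-1], 1) for d in dims)
        self.refines = nn.ModuleList(DilatedAttentionBlock(dims[-1]) for _ in dims)
        self.head = nn.Conv2d(dims[-1], 1, 1)  # binary referring mask logits

    def forward(self, feats):
        # feats: per-stage fused cross-view features, deepest stage first.
        x = self.refines[0](self.laterals[0](feats[0]))
        for lat, ref, skip in zip(self.laterals[1:], self.refines[1:], feats[1:]):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = ref(x + lat(skip))
        return self.head(x)
```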
Problem

Research questions and friction points this paper is trying to address.

Drastic scale variation of targets in remote sensing imagery
Tiny, ambiguous targets are poorly handled by saliency-biased, single-view methods
Global and local semantics are not integrated into a unified feature representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-view Semantics Interaction Network (CSINet) with parallel remote-view and close-view branches
Cross-View Window-attention module (CVWin) supplements global and local semantics across views at every encoding stage
Collaboratively Dilated Attention enhanced Decoder (CDAD) mines target orientation and integrates cross-view multiscale features (see the wiring sketch after this list)
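To show how the listed pieces could fit together, here is a hedged end-to-end wiring sketch that reuses the `CrossViewWindowAttention` and `CDADecoder` classes from the earlier sketches. The plain convolutional backbone stages, the per-stage fusion by addition, and the omission of the referring-expression text encoder are simplifications for illustration, not the authors' design.

```python
# Hedged sketch of the dual-branch encoder with per-stage CVWin exchange
# feeding a CDAD-style decoder. Reuses CrossViewWindowAttention and
# CDADecoder from the sketches above; all wiring choices are assumptions.
import torch
import torch.nn as nn


class CSINetSketch(nn.Module):
    def __init__(self, stage_dims=(64, 128, 256, 512), win=8):
        super().__init__()

        def stage(cin, cout):
            # Placeholder stride-2 conv stage standing in for a real backbone stage.
            return nn.Sequential(nn.Conv2d(cin, cout, 3, 2, 1), nn.ReLU(inplace=True))

        dims = (3,) + stage_dims
        self.remote_stages = nn.ModuleList(stage(dims[i], dims[i + 1]) for i in range(4))
        self.close_stages = nn.ModuleList(stage(dims[i], dims[i + 1]) for i in range(4))
        self.cvwin = nn.ModuleList(CrossViewWindowAttention(c, win=win) for c in stage_dims)
        self.decoder = CDADecoder(dims=tuple(reversed(stage_dims)))

    def forward(self, remote_img, close_img):
        # Assumes both views share the input size (divisible so windows align).
        r, c, fused = remote_img, close_img, []
        for rs, cs, xattn in zip(self.remote_stages, self.close_stages, self.cvwin):
            r, c = rs(r), cs(c)
            r, c = xattn(r, c)       # exchange global/local semantics per stage
            fused.append(r + c)      # simple additive fusion of the two views
        return self.decoder(list(reversed(fused)))  # deepest stage first
```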
🔎 Similar Papers
No similar papers found.
Jiaxing Yang
School of Information and Communication Engineering, Dalian University of Technology, Dalian, China
Lihe Zhang
Dalian University of Technology
Huchuan Lu
School of Information and Communication Engineering, Dalian University of Technology, Dalian, China