🤖 AI Summary
Existing Referring Remote Sensing Image Segmentation (RRSIS) methods rely on single-view, full-image inputs and saliency-biased mechanisms, limiting their ability to accurately segment small or ambiguous objects in remote sensing imagery. To address this, we propose CSINet, a Cross-View Semantic Interaction Network. CSINet features a dual-branch encoder that separately models distant- and close-view visual cues; introduces a Cross-View Window Attention (CVWin) module to jointly enhance local details and global semantics during encoding; and employs a Collaborative Dilated Attention Decoder (CDAD) that explicitly captures target orientation while fusing multi-scale features. Evaluated on multiple remote sensing benchmarks, CSINet achieves significant improvements in segmentation accuracy for small and ambiguous objects, while maintaining efficient inference. Our approach establishes a new paradigm for referring-expression-driven remote sensing image segmentation.
📝 Abstract
Recently, Referring Remote Sensing Image Segmentation (RRSIS) has attracted wide attention. To handle the drastic scale variation of remote sensing targets, existing methods use only the full image as input and embed saliency-preferring cross-scale interaction techniques into a traditional single-view structure. Although effective for visually salient targets, they still struggle with tiny, ambiguous ones in many real scenarios. In this work, we instead propose a parallel yet unified segmentation framework, the Cross-View Semantic Interaction Network (CSINet), to overcome these limitations. Motivated by how humans observe targets of interest, the network orchestrates visual cues from remote and close distances to conduct synergistic prediction. At every encoding stage, a Cross-View Window attention (CVWin) module supplements global and local semantics into the close-view and remote-view branch features, promoting a unified feature representation. In addition, we develop a Collaboratively Dilated Attention enhanced Decoder (CDAD) to mine the orientation property of targets while integrating cross-view multi-scale features. The proposed network seamlessly enhances the exploitation of global and local semantics, achieving significant improvements over existing methods while maintaining satisfactory speed.
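The abstract does not give CVWin's exact formulation, so the following is only a rough illustrative sketch of cross-view window attention: features from the close-view and remote-view branches are partitioned into matching windows, and queries from each branch attend to keys/values of the other branch within the same window. The window size, the residual fusion, and the omission of learned Q/K/V projections are all simplifying assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_partition(x, ws):
    # (H, W, C) -> (num_windows, ws*ws, C)
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_reverse(x, ws, H, W, C):
    # inverse of window_partition: (num_windows, ws*ws, C) -> (H, W, C)
    x = x.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def cross_view_window_attention(close, remote, ws=4):
    """Illustrative cross-view window attention (not the paper's exact CVWin):
    each branch's window tokens attend to the other branch's window tokens,
    and the result is fused back with a residual connection."""
    H, W, C = close.shape
    qc = window_partition(close, ws)   # close-view window tokens
    kr = window_partition(remote, ws)  # remote-view window tokens
    # close-view queries attend to remote-view keys/values
    attn_c = softmax(qc @ kr.transpose(0, 2, 1) / np.sqrt(C))
    out_c = qc + attn_c @ kr
    # remote-view queries attend to close-view keys/values
    attn_r = softmax(kr @ qc.transpose(0, 2, 1) / np.sqrt(C))
    out_r = kr + attn_r @ qc
    return (window_reverse(out_c, ws, H, W, C),
            window_reverse(out_r, ws, H, W, C))
```

In a real implementation each attention would use learned projections and multiple heads (e.g. a Swin-style windowed multi-head attention), but the cross-view exchange pattern above is the key idea: both branches are enhanced with semantics from the other view while keeping their spatial resolution.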