DiffRIS: Enhancing Referring Remote Sensing Image Segmentation with Pre-trained Text-to-Image Diffusion Models

📅 2025-06-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address semantic ambiguity in remote sensing image referring segmentation—caused by scale variation, arbitrary object orientation, and top-down viewing angles—this paper proposes a cross-modal segmentation framework tailored for aerial imagery. The method bridges the domain gap between general vision-language models and remote sensing semantics via two key innovations: (1) a Context-Perceptive Adapter (CP-Adapter) that aligns generic multimodal representations with remote sensing–specific semantics; and (2) a Progressive Cross-Modal Reasoning Decoder (PCMRD), which jointly performs global context modeling, object-aware reasoning, and multi-scale feature interaction. Leveraging a pre-trained text-to-image diffusion model, the approach establishes a fine-grained cross-modal alignment mechanism. Evaluated on three benchmarks—RRSIS-D, RefSegRS, and RISBench—the method achieves state-of-the-art performance, significantly improving natural language–driven object localization and segmentation accuracy in complex aerial scenes. This advances practical applications such as disaster response and urban planning.

📝 Abstract
Referring remote sensing image segmentation (RRSIS) enables the precise delineation of regions within remote sensing imagery through natural language descriptions, serving critical applications in disaster response, urban development, and environmental monitoring. Despite recent advances, current approaches face significant challenges in processing aerial imagery due to complex object characteristics including scale variations, diverse orientations, and semantic ambiguities inherent to the overhead perspective. To address these limitations, we propose DiffRIS, a novel framework that harnesses the semantic understanding capabilities of pre-trained text-to-image diffusion models for enhanced cross-modal alignment in RRSIS tasks. Our framework introduces two key innovations: a context perception adapter (CP-adapter) that dynamically refines linguistic features through global context modeling and object-aware reasoning, and a progressive cross-modal reasoning decoder (PCMRD) that iteratively aligns textual descriptions with visual regions for precise segmentation. The CP-adapter bridges the domain gap between general vision-language understanding and remote sensing applications, while PCMRD enables fine-grained semantic alignment through multi-scale feature interaction. Comprehensive experiments on three benchmark datasets, RRSIS-D, RefSegRS, and RISBench, demonstrate that DiffRIS consistently outperforms existing methods across all standard metrics, establishing a new state-of-the-art for RRSIS tasks. The significant performance improvements validate the effectiveness of leveraging pre-trained diffusion models for remote sensing applications through our proposed adaptive framework.
Problem

Research questions and friction points this paper is trying to address.

Enhance segmentation of remote sensing images using natural language descriptions
Address complex object characteristics like scale variations and semantic ambiguities
Improve cross-modal alignment between text and aerial imagery for precise segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained text-to-image diffusion models
Introduces context perception adapter for linguistic refinement
Uses progressive cross-modal decoder for precise alignment
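The core alignment idea behind these contributions, scoring each visual patch against the tokens of the referring expression via cross-attention and reading a mask out of the resulting attention map, can be illustrated with a toy sketch. The NumPy code below is a hypothetical simplification for intuition only, not the authors' implementation: the function name, the token-mean pooling, and the min-max thresholding heuristic are all assumptions, and random arrays stand in for real diffusion and text-encoder features.

```python
import numpy as np

def coarse_referring_mask(visual_feats, text_embs, hw, threshold=0.5):
    """Toy cross-modal alignment: score each visual patch against the
    referring expression's token embeddings, then threshold into a mask.

    visual_feats: (H*W, d) patch features (stand-in for diffusion U-Net features)
    text_embs:    (T, d)   token embeddings of the referring expression
    hw:           (H, W)   spatial size to reshape the mask back to
    """
    d = visual_feats.shape[-1]
    sim = visual_feats @ text_embs.T / np.sqrt(d)   # (H*W, T) attention logits
    # Softmax over spatial positions: where does each text token attend?
    sim = sim - sim.max(axis=0, keepdims=True)
    attn = np.exp(sim)
    attn /= attn.sum(axis=0, keepdims=True)         # (H*W, T)
    relevance = attn.mean(axis=1)                   # pool attention over tokens
    # Min-max normalize to [0, 1], then binarize into a coarse mask.
    rng_span = relevance.max() - relevance.min() + 1e-8
    relevance = (relevance - relevance.min()) / rng_span
    return (relevance >= threshold).reshape(hw)

# Hypothetical usage with random features in place of real encoders.
rng = np.random.default_rng(0)
H, W, d, T = 8, 8, 16, 4
mask = coarse_referring_mask(rng.normal(size=(H * W, d)),
                             rng.normal(size=(T, d)), (H, W))
print(mask.shape, mask.dtype)
```

In the actual framework, this single attention readout is replaced by the PCMRD's iterative, multi-scale refinement, and the text features entering it are first adapted to remote sensing semantics by the CP-adapter.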
Zhe Dong
Microsoft AI
Yuzhe Sun
School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, 150001, China
Tianzhu Liu
School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, 150001, China
Yanfeng Gu
Professor of Electronics Engineering, Harbin Institute of Technology
image processing, pattern recognition, machine learning