An Efficient Remote Sensing Super Resolution Method Exploring Diffusion Priors and Multi-Modal Constraints for Crop Type Mapping

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in remote sensing super-resolution (RSSR)—high training costs and slow inference of diffusion models, insufficient utilization of auxiliary physical information, and lack of downstream task validation—this paper proposes LSSR, a lightweight multimodal-guided framework. Methodologically, LSSR freezes the pre-trained Stable Diffusion backbone and incorporates cross-modal attention and adapter modules to fuse digital elevation models, land cover maps, and SAR data as physically grounded constraints; it further introduces a Fourier-domain NDVI loss to preserve vegetation index fidelity. Experiments demonstrate state-of-the-art performance: RGB PSNR/SSIM of 32.63/0.84, NDVI error of only 0.042, and inference time of 0.39 seconds per image. For downstream crop-type mapping, LSSR achieves an F1-score of 0.86, edging out native Sentinel-2 imagery (F1: 0.85). Notably, this work presents the first systematic evaluation of RSSR's practical utility in real-world agricultural applications, jointly optimizing computational efficiency, reconstruction accuracy, and scientific interpretability.


📝 Abstract
Super resolution offers a way to harness medium- and even low-resolution but historically valuable remote sensing image archives. Generative models, especially diffusion models, have recently been applied to remote sensing super resolution (RSSR), yet several challenges remain. First, diffusion models are effective but require expensive resources to train from scratch and have slow inference speeds. Second, current methods make limited use of auxiliary information as real-world constraints to reconstruct scientifically realistic images. Finally, most current methods lack evaluation on downstream tasks. In this study, we present an efficient LSSR framework for RSSR, supported by a new multimodal dataset of paired 30 m Landsat 8 and 10 m Sentinel-2 imagery. Built on a frozen pretrained Stable Diffusion backbone, LSSR integrates cross-modal attention with auxiliary knowledge (Digital Elevation Model, land cover, month) and Synthetic Aperture Radar guidance, enhanced by adapters and a tailored Fourier NDVI loss to balance spatial detail and spectral fidelity. Extensive experiments demonstrate that LSSR significantly improves crop boundary delineation and recovery, achieving state-of-the-art performance with Peak Signal-to-Noise Ratio/Structural Similarity Index Measure of 32.63/0.84 (RGB) and 23.99/0.78 (IR) and the lowest NDVI Mean Squared Error (0.042), while maintaining efficient inference (0.39 sec/image). Moreover, LSSR transfers effectively to NASA Harmonized Landsat and Sentinel (HLS) super resolution, yielding more reliable crop classification (F1: 0.86) than native Sentinel-2 imagery (F1: 0.85). These results highlight the potential of RSSR to advance precision agriculture.
Problem

Research questions and friction points this paper is trying to address.

Enhancing remote sensing image resolution using diffusion priors
Improving scientific realism through multi-modal auxiliary constraints
Optimizing crop type mapping accuracy via efficient super-resolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pretrained Stable Diffusion with adapters
Integrates multimodal constraints via cross-modal attention
Employs Fourier NDVI loss for spectral fidelity
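The Fourier NDVI loss in the last bullet can be sketched in NumPy. The paper's exact formulation is not reproduced here; the sketch below assumes a mean-squared-error comparison of the 2-D FFT magnitudes of the predicted and reference NDVI maps, where NDVI is the standard (NIR − Red)/(NIR + Red) index. Function names and the magnitude-based comparison are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def fourier_ndvi_loss(pred_nir, pred_red, ref_nir, ref_red):
    """Hypothetical Fourier-domain NDVI loss (a sketch, not the paper's code):
    MSE between the 2-D FFT magnitude spectra of predicted and reference NDVI."""
    ndvi_pred = ndvi(pred_nir, pred_red)
    ndvi_ref = ndvi(ref_nir, ref_red)
    mag_pred = np.abs(np.fft.fft2(ndvi_pred))
    mag_ref = np.abs(np.fft.fft2(ndvi_ref))
    return float(np.mean((mag_pred - mag_ref) ** 2))
```

Comparing NDVI in the frequency domain penalizes errors in spatial-frequency content (e.g. smoothed-out field boundaries) rather than only per-pixel differences, which matches the paper's stated goal of balancing spatial detail against spectral fidelity.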