SatFusion: A Unified Framework for Enhancing Satellite IoT Images via Multi-Temporal and Multi-Source Data Fusion

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing satellite Internet-of-Things (Sat-IoT) image enhancement methods struggle to jointly exploit the temporal and spectral/spatial complementarity of multi-temporal and multi-source data: multi-image super-resolution (MISR) suffers from insufficient texture detail in its inputs, while pansharpening is highly sensitive to registration errors and noise. This paper proposes SatFusion, the first deep learning framework to unify multi-temporal super-resolution and multi-source fusion. It introduces a dynamic spectral consistency refinement mechanism and a joint optimization strategy combining deep feature alignment with adaptive fusion, significantly improving robustness to noise and misregistration. Through the synergistic design of three modules—multi-temporal fusion, multi-source fusion, and fusion composition—and weighted multi-loss supervision, SatFusion achieves state-of-the-art performance on the WorldStrat, WV3, QB, and GF2 datasets, delivering superior reconstruction quality, stronger generalizability, and practical efficacy in complex scenarios.

📝 Abstract
With the rapid advancement of the digital society, the proliferation of satellites in the Satellite Internet of Things (Sat-IoT) has led to the continuous accumulation of large-scale multi-temporal and multi-source images across diverse application scenarios. However, existing methods fail to fully exploit the complementary information embedded in both temporal and source dimensions. For example, Multi-Image Super-Resolution (MISR) enhances reconstruction quality by leveraging temporal complementarity across multiple observations, yet the limited fine-grained texture details in input images constrain its performance. Conversely, pansharpening integrates multi-source images by injecting high-frequency spatial information from panchromatic data, but typically relies on pre-interpolated low-resolution inputs and assumes noise-free alignment, making it highly sensitive to noise and misregistration. To address these issues, we propose SatFusion: A Unified Framework for Enhancing Satellite IoT Images via Multi-Temporal and Multi-Source Data Fusion. Specifically, SatFusion first employs a Multi-Temporal Image Fusion (MTIF) module to achieve deep feature alignment with the panchromatic image. Then, a Multi-Source Image Fusion (MSIF) module injects fine-grained texture information from the panchromatic data. Finally, a Fusion Composition module adaptively integrates the complementary advantages of both modalities while dynamically refining spectral consistency, supervised by a weighted combination of multiple loss functions. Extensive experiments on the WorldStrat, WV3, QB, and GF2 datasets demonstrate that SatFusion significantly improves fusion quality, robustness under challenging conditions, and generalizability to real-world Sat-IoT scenarios. The code is available at: https://github.com/dllgyufei/SatFusion.git.
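The three-stage pipeline in the abstract (multi-temporal fusion → pan detail injection → adaptive composition) can be illustrated with a deliberately simplified NumPy sketch. The real SatFusion modules are learned networks with deep feature alignment, so the frame averaging, box-filter high-pass, and fixed blend weight below are stand-in assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mtif_stub(frames):
    """Stand-in for the Multi-Temporal Image Fusion (MTIF) module:
    average T aligned low-res frames (the paper instead uses learned
    deep-feature alignment with the panchromatic image)."""
    return frames.mean(axis=0)

def upsample_nn(img, scale):
    """Nearest-neighbour upsampling of an MS band to the pan grid."""
    return np.kron(img, np.ones((scale, scale)))

def box_blur(img, k=3):
    """Crude box blur used to split the pan image into low- and
    high-frequency parts."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def msif_stub(ms_up, pan, alpha=1.0):
    """Stand-in for the Multi-Source Image Fusion (MSIF) module:
    inject the pan image's high-frequency detail into the band."""
    detail = pan - box_blur(pan)
    return ms_up + alpha * detail

def fusion_composition_stub(ms_up, fused, w=0.5):
    """Stand-in for the Fusion Composition module: a fixed convex
    blend (SatFusion learns this weighting adaptively and also
    refines spectral consistency)."""
    return w * ms_up + (1 - w) * fused

# Toy data: 4 temporal observations of one 16x16 MS band, 64x64 pan.
rng = np.random.default_rng(0)
frames = rng.random((4, 16, 16))
pan = rng.random((64, 64))

ms = mtif_stub(frames)                    # (16, 16)
ms_up = upsample_nn(ms, 4)                # (64, 64), pan resolution
fused = msif_stub(ms_up, pan)             # pan detail injected
out = fusion_composition_stub(ms_up, fused)
print(out.shape)                          # (64, 64)
```

Note that working on multiple raw low-resolution frames (rather than a single pre-interpolated input, as in classical pansharpening) is precisely what lets the temporal stage compensate for noise and misregistration in any one observation.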
Problem

Research questions and friction points this paper is trying to address.

Enhancing satellite images by fusing multi-temporal and multi-source data
Addressing limitations in existing super-resolution and pansharpening methods
Improving fusion quality and robustness for Satellite IoT applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

SatFusion integrates multi-temporal and multi-source satellite data
It aligns features and injects texture via dual fusion modules
The framework dynamically refines spectral consistency with adaptive composition
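The weighted multi-loss supervision mentioned above can be sketched as a convex combination of a pixel-fidelity term and a spectral-consistency term. The specific terms and weights here (L1 plus a spectral angle mapper, a common spectral-distortion measure) are illustrative assumptions; the paper's actual loss composition may differ.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: pixel-level reconstruction fidelity."""
    return np.abs(pred - target).mean()

def sam_loss(pred, target, eps=1e-8):
    """Spectral angle mapper over (H, W, C) arrays: mean angle
    between predicted and reference spectral vectors per pixel,
    penalizing spectral distortion independent of brightness."""
    dot = (pred * target).sum(axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos).mean()

def total_loss(pred, target, w_l1=1.0, w_sam=0.1):
    """Weighted multi-loss supervision (weights are assumptions)."""
    return w_l1 * l1_loss(pred, target) + w_sam * sam_loss(pred, target)

# Toy example: 8x8 image with 4 spectral bands.
rng = np.random.default_rng(1)
target = rng.random((8, 8, 4))
pred = target + 0.05 * rng.standard_normal(target.shape)
print(total_loss(pred, target))
```

Balancing a pixel term against a spectral term is what allows the composition stage to trade sharpness from the pan branch against spectral fidelity from the multispectral branch.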