Patch-GAN Transfer Learning with Reconstructive Models for Cloud Removal

📅 2025-01-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the problem of missing ground-object information in remote sensing imagery caused by cloud occlusion, this paper proposes a reconstruction-oriented transfer learning framework based on generative adversarial networks (GANs). Methodologically, we introduce masked autoencoders (MAEs) into cloud removal for the first time, leveraging their strong reconstruction priors to recover scene structures hidden beneath clouds; design a patch-based discriminator that sharpens the assessment of local texture fidelity; and establish an end-to-end, reconstruction-driven transfer paradigm that jointly optimizes global structural consistency and local detail preservation. Evaluated on standard cloud removal benchmarks, our approach significantly outperforms existing GAN-based methods and achieves performance comparable to state-of-the-art non-GAN approaches. Quantitative and qualitative results show substantial gains in both the reconstruction accuracy of cloud-obscured ground objects and the visual realism of the restored imagery.
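The MAE reconstruction prior rests on a simple training signal: hide a large fraction of image patches and penalize the model only on the pixels it had to guess. A minimal NumPy sketch of that masking and loss (the patch size, mask ratio, and all function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(h, w, patch=16, mask_ratio=0.75, rng=rng):
    """Boolean pixel mask (True = hidden), hiding mask_ratio of the patch grid."""
    gh, gw = h // patch, w // patch
    n = gh * gw
    hidden = rng.permutation(n) < int(n * mask_ratio)   # exactly n*ratio patches hidden
    grid = hidden.reshape(gh, gw)
    # Expand each grid cell to a patch x patch block of pixels.
    return np.kron(grid, np.ones((patch, patch), dtype=bool))

def masked_recon_loss(target, prediction, mask):
    """MAE-style objective: mean squared error on the hidden pixels only."""
    diff = (target - prediction) ** 2
    return float(diff[mask].mean())
```

In the paper's transfer setting, a generator initialized with such a pretrained MAE supplies the global structure of the cloud-covered regions; the sketch above only shows where the reconstruction signal comes from.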

πŸ“ Abstract
Cloud removal plays a crucial role in enhancing remote sensing image analysis, yet accurately reconstructing cloud-obscured regions remains a significant challenge. Recent advancements in generative models have made the generation of realistic images increasingly accessible, offering new opportunities for this task. Given the conceptual alignment between image generation and cloud removal, generative models present a promising approach for addressing cloud removal in remote sensing. In this work, we propose a deep transfer learning approach built on a generative adversarial network (GAN) framework to explore the potential of the novel masked autoencoder (MAE) image reconstruction model in cloud removal. Due to the complexity of remote sensing imagery, we further propose a patch-wise discriminator that determines whether each patch of the image is real or fake. The proposed reconstructive transfer learning approach demonstrates significant improvements in cloud removal performance compared to other GAN-based methods. Additionally, whilst direct comparisons with some state-of-the-art cloud removal techniques are limited because their train/test data splits are not clearly documented, the proposed model achieves competitive results on the available benchmarks.
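The patch-wise discriminator decides real versus fake per local patch rather than emitting one image-level verdict, so the adversarial loss averages many small decisions. A toy NumPy sketch of the patch split and loss aggregation (the per-patch scoring function is a stand-in for the convolutional discriminator; all names and the patch size are illustrative assumptions):

```python
import numpy as np

def extract_patches(img, patch=16):
    """Split an HxW image into non-overlapping patch x patch tiles."""
    h, w = img.shape
    tiles = img[:h - h % patch, :w - w % patch]
    tiles = tiles.reshape(h // patch, patch, w // patch, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch, patch)

def patch_scores(img, patch=16):
    """Toy per-patch 'realness' score in (0, 1); a real Patch-GAN uses conv layers."""
    tiles = extract_patches(img, patch)
    return 1.0 / (1.0 + np.exp(-tiles.mean(axis=(1, 2))))   # sigmoid of patch mean

def patch_gan_loss(real_img, fake_img, patch=16):
    """Binary cross-entropy averaged over every patch decision of both images."""
    eps = 1e-7
    s_real = np.clip(patch_scores(real_img, patch), eps, 1 - eps)
    s_fake = np.clip(patch_scores(fake_img, patch), eps, 1 - eps)
    return float(-(np.log(s_real).mean() + np.log(1 - s_fake).mean()) / 2)
```

Because each patch is judged independently, a generated image with one blurry region is penalized there even if the rest of the image looks plausible, which is the local-texture pressure the abstract describes.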
Problem

Research questions and friction points this paper is trying to address.

Cloud Removal
Remote Sensing Images
Image Analysis Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Patch-GAN
Masked Autoencoder (MAE)
Cloud Removal
Wanli Ma
School of Computer Science and Informatics, Cardiff University, Cardiff, CF24 4AG, UK.
Oktay Karakuş
School of Computer Science and Informatics, Cardiff University, Cardiff, CF24 4AG, UK.
Paul L. Rosin
School of Computer Science and Informatics, Cardiff University
Computer Vision, Image Processing, Mesh Processing, Non-Photorealistic Rendering, Style Transfer