🤖 AI Summary
Remote sensing diffusion foundation models suffer from training-data redundancy, noise contamination, and severe class imbalance; existing methods neglect the distributional requirements of generative modeling and the intrinsic heterogeneity of remote sensing imagery. To address this, we propose a novel, training-free, two-stage scene-aware data pruning framework: first performing coarse filtering via local information entropy, then conducting hierarchical clustering and representative sampling guided by remote sensing scene classification benchmarks. This work establishes the first "training-agnostic + scene-aware" pruning paradigm, effectively balancing fine-grained fidelity and global diversity. Under an aggressive 85% pruning ratio, our method significantly accelerates model convergence and improves generation quality. Extensive experiments demonstrate state-of-the-art performance across downstream tasks, including remote sensing image super-resolution and semantic image synthesis.
📄 Abstract
Diffusion-based remote sensing (RS) generative foundation models are crucial for downstream tasks. However, these models rely on large amounts of globally representative data, which often contain redundancy, noise, and class imbalance, reducing training efficiency and hindering convergence. Existing RS diffusion foundation models typically aggregate multiple classification datasets or apply simplistic deduplication, overlooking the distributional requirements of generative modeling and the heterogeneity of RS imagery. To address these limitations, we propose a training-free, two-stage data pruning approach that quickly selects a high-quality subset under high pruning ratios, enabling a preliminary foundation model to converge rapidly and serve as a versatile backbone for generation, downstream fine-tuning, and other applications. Our method jointly considers local information content and global scene-level diversity and representativeness. First, an entropy-based criterion efficiently removes low-information samples. Next, leveraging RS scene classification datasets as reference benchmarks, we perform scene-aware clustering with stratified sampling, improving clustering effectiveness while reducing computational cost on large-scale unlabeled data. Finally, by balancing cluster-level uniformity and sample representativeness, the method enables fine-grained selection under high pruning ratios while preserving overall diversity and representativeness. Experiments show that, even after pruning 85% of the training data, our method significantly improves convergence and generation quality. Furthermore, diffusion foundation models trained with our method consistently achieve state-of-the-art performance across downstream tasks, including super-resolution and semantic image synthesis. This data pruning paradigm offers practical guidance for developing RS generative foundation models.
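The two stages described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy threshold, the per-cluster keep ratio, and the nearest-to-centroid selection rule are assumptions chosen for clarity (the paper's cluster assignments come from scene-aware hierarchical clustering, which is abstracted here as given labels).

```python
import numpy as np

def local_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_filter(images, threshold: float):
    """Stage 1 (sketch): coarse filtering that drops low-information samples."""
    return [im for im in images if local_entropy(im) >= threshold]

def cluster_stratified_sample(features: np.ndarray,
                              labels: np.ndarray,
                              keep_ratio: float):
    """Stage 2 (sketch): stratified sampling within each scene cluster.

    Keeps the same fraction from every cluster (cluster-level uniformity)
    and, as an assumed representativeness proxy, prefers samples nearest
    the cluster centroid.
    """
    kept = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        n_keep = max(1, int(len(idx) * keep_ratio))  # never empty a cluster
        kept.extend(idx[np.argsort(dists)[:n_keep]].tolist())
    return sorted(kept)
```

An 85% pruning ratio would correspond to `keep_ratio = 0.15` after the entropy pass; in practice the features would be embeddings from a scene-classification backbone rather than raw pixels.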