🤖 AI Summary
This work proposes the first hyperspectral image emulation framework based on latent representation learning, addressing the high computational cost of traditional radiative transfer models and their difficulty in jointly modeling spatial-spectral information. Leveraging a pretrained variational autoencoder (VAE) and a parameter-to-latent mapping that supports interpolation in the latent space, the method enables efficient spectrum-level and spatial-spectral emulation through either a one-step or a two-step training strategy. The approach significantly outperforms conventional regression-based emulators on both PROSAIL-simulated data and real Sentinel-3 OLCI imagery, achieving superior reconstruction accuracy, spectral fidelity, and downstream biophysical parameter retrieval.
📝 Abstract
Synthetic hyperspectral image (HSI) generation is essential for large-scale simulation, algorithm development, and mission design, yet traditional radiative transfer models remain computationally expensive and are often limited to spectrum-level outputs. In this work, we propose a hyperspectral emulation framework that learns a latent generative representation of hyperspectral data. The proposed approach supports both spectrum-level and spatial-spectral emulation and can be trained either in a direct one-step formulation or in a two-step strategy that couples variational autoencoder (VAE) pretraining with parameter-to-latent interpolation. Experiments on PROSAIL-simulated vegetation data and Sentinel-3 OLCI imagery demonstrate that the method outperforms classical regression-based emulators in reconstruction accuracy, spectral fidelity, and robustness to real-world spatial variability. We further show that emulated HSIs preserve performance in downstream biophysical parameter retrieval, highlighting the practical relevance of emulated data for remote sensing applications.