🤖 AI Summary
To address the scarcity and modality imbalance of multimodal geoscientific data (e.g., abundant seismic velocity models versus scarce observed waveforms), this paper proposes a physics-aware, end-to-end diffusion-based generative framework. Methodologically, it introduces a novel "one-input–two-output" encoder-decoder architecture that enforces strictly paired generation from a shared latent space, and it is the first to jointly embed diffusion modeling with modality-imbalance priors in scientific data generation, incorporating a modality-weighted loss. Evaluated via fine-tuning on the OpenFWI dataset, the method achieves a 37% reduction in Fréchet Inception Distance (FID), a 29% improvement in paired consistency, and generates physically interpretable, mutually consistent waveforms and velocity models. These high-fidelity, physically grounded synthetic pairs improve downstream full-waveform inversion accuracy by 21%.
📝 Abstract
Recently, the advent of generative AI technologies has had a transformational impact on our daily lives, yet their use in scientific domains remains in its early stages. Data scarcity is a major, well-known barrier in data-driven scientific computing, so physics-guided generative AI holds significant promise. Most scientific computing tasks involve converting between multiple data modalities that describe the same physical phenomenon, for example, spatial and waveform data in seismic imaging, time and frequency representations in signal processing, and temporal and spectral fields in climate modeling; as such, multi-modal paired data generation is required, rather than the single-modal generation typical of natural images (e.g., faces, scenery). Moreover, in real-world applications the available data are commonly imbalanced across modalities; for example, spatial data (i.e., velocity maps) in seismic imaging can be easily simulated, but real-world seismic waveforms are largely lacking. While the most recent efforts have enabled powerful diffusion models to generate multi-modal data, how to leverage such imbalanced data remains unclear. In this work, we use seismic imaging in subsurface geophysics as a vehicle to present "UB-Diff", a novel diffusion model for multi-modal paired scientific data generation. One major innovation is a one-in-two-out encoder-decoder network structure, which ensures that paired data are obtained from a shared co-latent representation. This co-latent representation is then used by the diffusion process for paired data generation. Experimental results on the OpenFWI dataset show that UB-Diff significantly outperforms existing techniques in terms of Fréchet Inception Distance (FID) score and pairwise evaluation, indicating that it generates reliable and useful multi-modal paired data.
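At a high level, the one-in-two-out idea can be illustrated as a single encoder producing a co-latent code that two separate decoders consume, so the two output modalities are paired by construction. The following is a minimal, shape-level NumPy sketch; all dimensions, layer shapes, and names here are illustrative assumptions, not the paper's actual architecture, and the diffusion model that would sample the co-latent code is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weights stand in for trained parameters in this sketch.
    return rng.standard_normal((in_dim, out_dim)) * 0.01

class OneInTwoOutSketch:
    """One encoder, one shared co-latent space, two modality decoders."""

    def __init__(self, vel_dim=64, latent_dim=16, wav_dim=128):
        self.enc = linear(vel_dim, latent_dim)      # encoder: velocity map -> co-latent
        self.dec_vel = linear(latent_dim, vel_dim)  # decoder 1: co-latent -> velocity map
        self.dec_wav = linear(latent_dim, wav_dim)  # decoder 2: co-latent -> waveform

    def encode(self, v):
        # Map the abundant modality (velocity) into the shared latent space.
        return np.tanh(v @ self.enc)

    def decode(self, z):
        # Both modalities are decoded from the SAME latent code,
        # so every generated (velocity, waveform) sample is paired.
        return z @ self.dec_vel, z @ self.dec_wav

model = OneInTwoOutSketch()
z = model.encode(rng.standard_normal((4, 64)))  # a batch of 4 co-latent codes
vel, wav = model.decode(z)                      # strictly paired outputs
```

In the full method, a diffusion process would be trained over codes like `z`, so that sampling a single latent yields a consistent velocity/waveform pair rather than two independently generated items.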