🤖 AI Summary
This work addresses the challenges of full-waveform inversion (FWI)—notably its high nonlinearity and ill-posedness—and the limited generalization of existing data-driven methods due to insufficient training data scale and geological diversity. To overcome these limitations, the authors propose a multidimensional co-scaling strategy that jointly optimizes model capacity, synthetic data diversity, and training procedures. For the first time, they train a billion-parameter neural network using only simple synthetic data, achieving strong generalization to unseen complex geological structures. The method attains state-of-the-art performance across six real-world benchmarks, including OpenFWI, Marmousi, and BP 2004, significantly improving the SSIM metric from 0.5844 to 0.7669 and effectively narrowing the generalization gap of data-driven FWI in realistic scenarios.
📝 Abstract
Full Waveform Inversion (FWI) is a highly nonlinear and ill-posed problem that aims to recover subsurface velocity maps from surface-recorded seismic waveform data. Existing data-driven FWI methods typically use small models, as available datasets are limited in volume, geological diversity, and spatial extent, raising substantial concerns about overfitting. Although these methods perform well on synthetic datasets, they fail to generalize to more realistic geological structures. In this work, we show that a model trained entirely on simulated and relatively simple data can generalize remarkably well to challenging and unseen geological benchmarks. We provide a working recipe that tames a billion-parameter model for FWI through coordinated scaling across three axes: model capacity, data diversity, and training strategy. Our model achieves state-of-the-art performance on OpenFWI and significantly narrows the generalization gap in data-driven FWI. Across six challenging geophysical benchmarks, including Marmousi, 2D SEG/EAGE Salt and Overthrust, BP 2004, Sigsbee, and SEAM Phase I, it infers complex structures absent from the training set and delivers significant performance improvements (SSIM improved from 0.5844 to 0.7669). Overall, our results demonstrate that, with an appropriate scaling strategy, large models trained on simple synthetic data can generalize substantially to more complex and realistic geological structures.
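The abstract reports results in terms of SSIM between predicted and ground-truth velocity maps. As a minimal sketch of what that metric measures, the snippet below implements a simplified single-window (global) variant of SSIM with NumPy; the paper's exact evaluation protocol (window size, constants, data range) is not specified here, so the constants follow the common defaults and the velocity values are illustrative.

```python
import numpy as np

def global_ssim(x, y, data_range):
    """Simplified single-window SSIM between two maps.

    Illustrative only: standard SSIM averages this quantity over local
    sliding windows; here we compute it once over the whole map.
    """
    # Stabilizing constants from the usual SSIM formulation (K1=0.01, K2=0.03).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy "ground truth" velocity map (m/s) and a noisy "prediction".
rng = np.random.default_rng(0)
truth = rng.uniform(1500.0, 4500.0, size=(70, 70))
pred = truth + rng.normal(0.0, 200.0, size=truth.shape)

dr = truth.max() - truth.min()
print(global_ssim(truth, truth, dr))  # identical maps score 1.0
print(global_ssim(truth, pred, dr))   # noisy prediction scores below 1.0
```

Scores lie in [-1, 1], with 1 indicating identical maps, which is why the reported jump from 0.5844 to 0.7669 represents a substantial improvement in structural fidelity.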