🤖 AI Summary
Traditional simulation-based inference (SBI) incurs high computational cost in high-dimensional parameter spaces because it relies on multiple sequential rounds of simulation and neural network training. To address this, we propose Dynamic SBI, a round-free, asynchronous, parallel SBI framework that advances simulation sampling and network training jointly through a dynamically updated, adaptive dataset, continuously refining the simulated distribution towards the target observation. By abandoning fixed-round scheduling, Dynamic SBI supports highly scalable parallel execution and couples deep neural networks with an adaptive parameter proposal mechanism. We evaluate the method on two high-dimensional astrophysical inference tasks: characterising the stochastic gravitational-wave background and analysing strong gravitational lensing systems. Results show that Dynamic SBI substantially reduces both simulation overhead and training burden while preserving, and often improving, posterior inference accuracy.
📝 Abstract
Simulation-based inference (SBI) is emerging as a new statistical paradigm for addressing complex scientific inference problems. By leveraging the representational power of deep neural networks, SBI can extract the most informative simulation features for the parameters of interest. Sequential SBI methods extend this approach by iteratively steering the simulation process towards the most relevant regions of parameter space. This is typically implemented through a round-based scheme in which simulation and network training alternate over multiple rounds. This strategy is particularly well suited for high-precision inference in high-dimensional settings, which are commonplace in physics applications with growing data volumes and increasing model fidelity. Here, we introduce dynamic SBI, which implements the core ideas of sequential methods in a round-free, asynchronous, and highly parallelisable manner. At its core is an adaptive dataset that is iteratively transformed during inference to resemble the target observation. Simulation and training proceed in parallel: trained networks are used both to filter out simulations incompatible with the data and to propose new, more promising ones. Compared to round-based sequential methods, this asynchronous structure can significantly reduce simulation costs and training overhead. We demonstrate that dynamic SBI achieves significant improvements in simulation and training efficiency while maintaining inference performance. We further validate our framework on two challenging astrophysical inference tasks: characterising the stochastic gravitational wave background and analysing strong gravitational lensing systems. Overall, this work presents a flexible and efficient new paradigm for sequential SBI.
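The core loop described in the abstract (maintain an adaptive dataset, filter out simulations incompatible with the observation, propose new parameters from the current estimate) can be illustrated with a toy, synchronous sketch. This is not the paper's implementation: the real method trains a neural network and runs simulation and training asynchronously in parallel, whereas here a crude Gaussian fit stands in for the trained network, the simulator is a trivial noisy identity map, and all thresholds and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # Toy stand-in for an expensive physics simulator: noisy view of theta.
    return theta + rng.normal(0.0, 0.5)

x_obs = 1.5                                  # the target observation
thetas = rng.uniform(-5.0, 5.0, size=200)    # initial draws from a flat prior
xs = np.array([simulator(t) for t in thetas])

for step in range(20):
    # "Training": fit a Gaussian to the data-compatible parameters.
    # (Stand-in for training a neural posterior/ratio estimator on the dataset.)
    dist = np.abs(xs - x_obs)
    compat = dist <= np.quantile(dist, 0.8)
    mu = thetas[compat].mean()
    sigma = thetas[compat].std() + 1e-3

    # Filter: drop the simulations least compatible with the observation,
    # so the adaptive dataset gradually comes to resemble the target.
    thetas, xs = thetas[compat], xs[compat]

    # Propose: draw new parameters from a broadened current estimate
    # and simulate them, replenishing the dataset.
    n_new = len(thetas) // 4 + 1
    new_thetas = rng.normal(mu, 1.5 * sigma, size=n_new)
    new_xs = np.array([simulator(t) for t in new_thetas])
    thetas = np.concatenate([thetas, new_thetas])
    xs = np.concatenate([xs, new_xs])

print(f"posterior estimate: theta = {thetas.mean():.2f} +/- {thetas.std():.2f}")
```

In the round-free version, the filter/propose steps and the network updates would run concurrently on a shared dataset rather than in this fixed alternation; the sketch only shows why the surviving parameter samples concentrate around values consistent with `x_obs`.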