🤖 AI Summary
This work addresses the computational bottleneck in efficiently simulating parameter-dependent stochastic differential equations (SDEs), where high-fidelity simulations must typically be performed independently for each parameter value. Existing approaches either incur substantial training costs or struggle with continuous parameter variations. To overcome these limitations, the authors propose a training-free conditional diffusion model framework that directly estimates the conditional score function from trajectory data, enabling rapid generation of SDE sample paths for arbitrary continuous parameters. The key innovation lies in a joint kernel-weighted Monte Carlo estimator that constructs a conditional score approximation from discrete parameter samples, allowing interpolation across both state space and parameter domain without neural network training. Experiments on three increasingly complex systems demonstrate accurate approximation of conditional distributions across parameters, significantly accelerating tasks such as parametric studies, uncertainty quantification, and real-time filtering.
📝 Abstract
Simulating parameter-dependent stochastic differential equations (SDEs) presents significant computational challenges, as separate high-fidelity simulations are typically required for each parameter value of interest. Despite the success of machine learning methods in learning SDE dynamics, existing approaches either require expensive neural network training for score function estimation or lack the ability to handle continuous parameter dependence. We present a training-free conditional diffusion model framework for learning stochastic flow maps of parameter-dependent SDEs, where both drift and diffusion coefficients depend on physical parameters. The key technical innovation is a joint kernel-weighted Monte Carlo estimator that approximates the conditional score function using trajectory data sampled at discrete parameter values, enabling interpolation across both state space and the continuous parameter domain. Once constructed, the resulting generative model produces sample trajectories for any parameter value within the training range without retraining, significantly accelerating parameter studies, uncertainty quantification, and real-time filtering applications. The performance of the proposed approach is demonstrated via three numerical examples of increasing complexity, showing accurate approximation of conditional distributions across varying parameter values.
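To make the central idea concrete, below is a minimal sketch of a kernel-weighted Monte Carlo estimator of a conditional score function. The schedule (a variance-preserving Ornstein-Uhlenbeck forward process), the Gaussian parameter kernel, the bandwidth `h`, and the function name `conditional_score` are illustrative assumptions, not the paper's exact formulation: the forward-noised density is treated as a mixture of Gaussians centered at the noised data points, with mixture weights reweighted by a kernel over the parameter domain, so no neural network training is needed.

```python
import numpy as np

def conditional_score(x, t, theta, data_x, data_theta, h=0.1):
    """Kernel-weighted Monte Carlo estimate of the conditional score
    grad_x log p_t(x | theta).  A sketch of the general training-free
    construction, not the paper's exact estimator.

    x          : (d,)   query state
    t          : float  diffusion time
    theta      : float  query parameter value
    data_x     : (N, d) trajectory samples at discrete parameter values
    data_theta : (N,)   parameter value attached to each sample
    h          : kernel bandwidth over the parameter domain (assumed)
    """
    alpha = np.exp(-0.5 * t)       # assumed VP (OU) forward schedule
    sigma2 = 1.0 - alpha ** 2      # forward-noising variance at time t

    # Gaussian kernel weights interpolating over the parameter domain
    w = np.exp(-0.5 * ((theta - data_theta) / h) ** 2)

    # Gaussian likelihood of x under each forward-noised data point
    diff = x - alpha * data_x                         # (N, d)
    loglik = -0.5 * np.sum(diff ** 2, axis=1) / sigma2
    g = w * np.exp(loglik - loglik.max())             # stabilized weights

    # Weighted average of per-sample scores (alpha * x_i - x) / sigma2
    return (g[:, None] * (-diff / sigma2)).sum(axis=0) / g.sum()
```

With a single data point the estimator reduces to the exact Gaussian score, which provides a quick sanity check; in practice the same estimate would be evaluated inside a reverse-time sampler to generate trajectories for any parameter value in the training range.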