🤖 AI Summary
This work investigates the universal approximation capability of neural stochastic differential equations (NSDEs) for general Itô-type diffusion SDEs. For continuous coefficient functions satisfying a global linear growth condition, we establish that NSDEs achieve local uniform approximation in path space. Our method integrates stochastic analysis (via Itô's formula), functional approximation theory, and parameterized neural network design; leveraging compactness arguments in path space, we derive quantitative error bounds and explicit convergence rates for SDEs with regular coefficients. This work presents the first universal approximation theory linking multiple neural network architectures to NSDEs. By transcending conventional deterministic neural network approximation frameworks, our results provide a rigorous theoretical foundation and computationally tractable guarantees for modeling stochastic dynamical systems, pricing financial derivatives, and simulating complex physical processes.
📄 Abstract
We identify various classes of neural networks that are able to approximate continuous functions locally uniformly subject to fixed global linear growth constraints. For such neural networks, the associated neural stochastic differential equations can approximate general stochastic differential equations of Itô diffusion type arbitrarily well. Moreover, quantitative error estimates are derived for stochastic differential equations with sufficiently regular coefficients.
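To make the approximation statement concrete, a sketch of the objects involved (the notation below is ours, not fixed by the abstract): the target Itô diffusion, its neural counterpart with the drift and diffusion coefficients replaced by neural networks, and the path-space approximation guarantee can be written as

```latex
% Target Itô diffusion with continuous coefficients b, \sigma of linear growth
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \qquad X_0 = x_0,

% Neural SDE: coefficients replaced by neural networks b_\theta, \sigma_\theta
% drawn from a class satisfying the same global linear growth constraint
dX_t^{\theta} = b_\theta(t, X_t^{\theta})\,dt + \sigma_\theta(t, X_t^{\theta})\,dW_t,
\qquad X_0^{\theta} = x_0,

% Approximation in path space: for every horizon T > 0 and tolerance
% \varepsilon > 0 there exist parameters \theta such that
\mathbb{E}\Big[\sup_{0 \le t \le T} \big|X_t - X_t^{\theta}\big|^2\Big] < \varepsilon.
```

Here the locally uniform approximation of $b, \sigma$ by $b_\theta, \sigma_\theta$ under the shared linear growth bound is what transfers, via stochastic analysis, to closeness of the solution paths; the exact mode of convergence and the error metric are as specified in the paper's results.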