🤖 AI Summary
Conventional numerical methods for high-dimensional semilinear parabolic PDEs (here with gradient-independent, Lipschitz-continuous nonlinearities) suffer from the "curse of dimensionality": their computational cost grows exponentially in the dimension d.
Method: This paper proposes an Lᵖ-approximation scheme that combines multilevel Picard (MLP) iteration with deep neural networks using ReLU, leaky ReLU, or softplus activation functions; a minimal sketch of the MLP recursion is given below.
Contribution/Results: The authors provide the first rigorous Lᵖ-error analysis proving that, for a prescribed accuracy ε > 0, both the computational effort and the number of network parameters grow at most polynomially in the dimension d and in ε⁻¹, thereby overcoming the curse of dimensionality. Unlike classical grid-based methods, the approach applies in arbitrary dimension d and achieves an Lᵖ approximation error ≤ ε at polynomially bounded cost; because the nonlinearity is gradient-independent, no knowledge of the solution's gradient is required.
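To make the method concrete, below is a minimal sketch of the MLP recursion, specialized to the semilinear heat equation ∂ₜu + ½Δu + f(u) = 0 with terminal condition u(T,·) = g. The function names `mlp`, `f`, `g` and the plain Monte Carlo setup are illustrative assumptions, not the paper's exact scheme; in particular, the paper additionally emulates the resulting random fields with deep ReLU/leaky-ReLU/softplus networks, which is omitted here.

```python
import numpy as np

def mlp(t, x, n, M, T, f, g, rng):
    """Multilevel Picard approximation U_{n,M}(t, x) for the semilinear heat
    PDE  ∂_t u + 0.5*Δu + f(u) = 0  with u(T, ·) = g.  Paths are Brownian:
    X_s = x + W_{s-t}.  Illustrative sketch, not the paper's exact scheme."""
    if n == 0:
        return 0.0  # base case of the Picard recursion
    d = x.shape[0]
    # Monte Carlo estimate of the terminal term E[g(x + W_{T-t})]
    W = rng.standard_normal((M**n, d)) * np.sqrt(T - t)
    result = np.mean(g(x + W))
    # Telescoping multilevel corrections for the nonlinearity f
    for l in range(n):
        samples = M ** (n - l)
        acc = 0.0
        for _ in range(samples):
            r = t + (T - t) * rng.uniform()                   # random quadrature time
            xr = x + rng.standard_normal(d) * np.sqrt(r - t)  # Brownian point at time r
            acc += f(mlp(r, xr, l, M, T, f, g, rng))
            if l > 0:  # subtract the coarser level as a control variate
                acc -= f(mlp(r, xr, l - 1, M, T, f, g, rng))
        result += (T - t) * acc / samples
    return result

# Hypothetical example: Lipschitz nonlinearity, bounded terminal condition
rng = np.random.default_rng(0)
f = lambda v: v / (1.0 + v * v)                    # gradient-independent, Lipschitz
g = lambda y: 1.0 / (1.0 + np.sum(y**2, axis=-1))  # terminal condition g(x)
print(mlp(0.0, np.zeros(10), n=3, M=3, T=1.0, f=f, g=g, rng=rng))
```

The telescoping differences f(U_l) − f(U_{l−1}) act as control variates: they let the sample counts M^(n−l) shrink at higher Picard levels, which keeps the overall effort polynomial in ε⁻¹, while the dimension enters only through the d-dimensional Gaussian draws, hence the polynomial dependence on d.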
📝 Abstract
We prove that multilevel Picard approximations and deep neural networks with ReLU, leaky ReLU, and softplus activation are capable of approximating solutions of semilinear Kolmogorov PDEs in $L^{\mathfrak{p}}$-sense, $\mathfrak{p}\in [2,\infty)$, in the case of gradient-independent, Lipschitz-continuous nonlinearities, while the computational effort of the multilevel Picard approximations and the required number of parameters in the neural networks grow at most polynomially in both the dimension $d\in \mathbb{N}$ and the reciprocal of the prescribed accuracy $\epsilon$.
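For orientation, the quantitative claim of the abstract can be spelled out in the following form; the reference domain $[0,1]^d$, the symbol $\mathcal{U}_\epsilon$ for the network realization, and the constants $c, k$ are our illustrative notation, not the paper's exact statement.

```latex
% Illustrative L^p accuracy/cost statement (notation is ours, not the paper's):
% for every \epsilon > 0 there exists a network realization \mathcal{U}_\epsilon with
\left( \int_{[0,1]^d} \bigl| u(t,x) - \mathcal{U}_\epsilon(x) \bigr|^{\mathfrak{p}} \, \mathrm{d}x \right)^{1/\mathfrak{p}}
\le \epsilon,
\qquad
\#\,\text{params}(\mathcal{U}_\epsilon) \le c\, d^{k} \epsilon^{-k},
% with constants c, k > 0 independent of d and \epsilon.
```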