Deep Neural Operator Learning for Probabilistic Models

📅 2025-11-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited generality and theoretical guarantees of operator learning for probabilistic models. Methodologically, it (i) constructs a stochastic-process operator learning architecture satisfying global Lipschitz continuity and general tail-probability constraints; (ii) pioneers the extension of neural operators to forward-backward stochastic differential equation (FBSDE) systems with free boundaries, establishing a universal approximation theorem with explicit network-size bounds; and (iii) achieves cross-parameter generalization for optimal stopping boundary prediction. Theoretically, the work extends the applicability of neural operators to nonlinear free-boundary problems, a previously unaddressed setting. Practically, it successfully learns stopping boundaries for basket American option pricing, demonstrating zero-shot generalization to unseen strike prices with high accuracy, strong robustness, and practical viability.
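The paper's exact architecture is not reproduced in this summary. As context for what "operator learning" means here, the following is a minimal DeepONet-style sketch (an assumption, not the paper's method): a branch net encodes an input function sampled at sensor points, a trunk net encodes a query location, and the operator output is their inner product. All sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random MLP parameters as a list of (W, b) pairs (illustrative init)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, weights):
    """Plain MLP with tanh hidden activations and a linear output layer."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return h @ W + b

m, p = 32, 16                  # sensor points, latent width (hypothetical)
branch = init_mlp([m, 64, p])  # encodes the input function u(x_1), ..., u(x_m)
trunk  = init_mlp([1, 64, p])  # encodes a query point y

def operator(u_samples, ys):
    """Approximate (G u)(y) as the inner product <branch(u), trunk(y)>."""
    b = mlp_forward(u_samples, branch)     # shape (p,)
    t = mlp_forward(np.atleast_2d(ys), trunk)  # shape (n_query, p)
    return t @ b                           # shape (n_query,)

xs = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * xs)                 # example input function on the sensors
out = operator(u, np.array([[0.25], [0.5], [0.75]]))
print(out.shape)                           # one output per query point
```

Training would fit the branch and trunk weights to input/output function pairs; here the weights are random and only the forward pass is shown.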

πŸ“ Abstract
We propose a deep neural-operator framework for a general class of probability models. Under global Lipschitz conditions on the operator over the entire Euclidean space, and for a broad class of probabilistic models, we establish a universal approximation theorem with explicit network-size bounds for the proposed architecture. The underlying stochastic processes are required only to satisfy integrability and general tail-probability conditions. We verify these assumptions for both European and American option-pricing problems within the forward-backward SDE (FBSDE) framework, which in turn covers a broad class of operators arising from parabolic PDEs, with or without free boundaries. Finally, we present a numerical example for a basket of American options, demonstrating that the learned model produces optimal stopping boundaries for new strike prices without retraining.
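The central object in the American-option example is the optimal stopping (exercise) boundary. As background only (this is a classical baseline, not the paper's operator method), the Longstaff–Schwartz least-squares Monte Carlo algorithm prices a single-strike American put by regressing continuation values backward in time; the operator approach aims to avoid re-running such a solver for every new strike. Parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def american_put_lsm(S0, K, r, sigma, T, n_steps=50, n_paths=20000):
    """Longstaff-Schwartz price of an American put under GBM."""
    dt = T / n_steps
    # Simulate geometric Brownian motion paths, including the t=0 column.
    Z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

    # Backward induction: `cash` holds each path's value at the current step.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(n_steps - 1, 0, -1):
        cash *= np.exp(-r * dt)            # discount one step back to time t
        itm = (K - S[:, t]) > 0            # regress on in-the-money paths only
        if itm.any():
            x = S[itm, t]
            A = np.vander(x, 3)            # polynomial basis {x^2, x, 1}
            coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            continuation = A @ coef
            exercise = np.maximum(K - x, 0.0)
            ex_now = exercise > continuation
            idx = np.where(itm)[0][ex_now]
            cash[idx] = exercise[ex_now]   # exercise: replace continuation value
    return np.exp(-r * dt) * cash.mean()

price = american_put_lsm(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
print(round(price, 3))  # roughly 6, slightly above the European put value
```

The per-path exercise decisions at each time step implicitly trace out the stopping boundary that the paper's neural operator learns to predict directly as a function of the model parameters.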
Problem

Research questions and friction points this paper is trying to address.

Learning neural operators for probabilistic models with approximation guarantees
Establishing universal approximation theorems under Lipschitz conditions
Solving American option pricing via FBSDE framework without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep neural operator framework for probability models
Universal approximation theorem with network-size bounds
Learned model generalizes to new strike prices without retraining
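One reason cross-strike generalization is plausible: under geometric Brownian motion, option values and exercise boundaries are homogeneous of degree one in the spot and strike, so scaling both by the same factor scales the price by that factor. This is a standard fact, illustrated below with the closed-form Black–Scholes European put (the American case has no closed form but obeys the same scaling); it is background intuition, not a claim from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S0, K, r, sigma, T):
    """Black-Scholes price of a European put."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

p1 = bs_put(100.0, 100.0, 0.05, 0.2, 1.0)
p2 = bs_put(110.0, 110.0, 0.05, 0.2, 1.0)  # spot and strike both scaled by 1.1
print(p2 / p1)  # ratio is 1.1 (up to floating-point rounding), by homogeneity
```

A learned operator that respects this scaling needs to capture only the strike-dependence that is not explained by it, which helps explain why zero-shot transfer to unseen strikes can work.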