AI Summary
This work addresses the limited generality and weak theoretical guarantees of operator learning for probabilistic models. Methodologically, it (i) constructs a stochastic-process operator-learning architecture that accommodates globally Lipschitz operators and general tail-probability constraints; (ii) pioneers the extension of neural operators to forward-backward stochastic differential equation (FBSDE) systems with free boundaries, establishing a universal approximation theorem with explicit network-size bounds; and (iii) achieves cross-parameter generalization for optimal stopping boundary prediction. Theoretically, the work extends the applicability of neural operators to nonlinear free-boundary problems, a previously unaddressed challenge. Practically, it learns stopping boundaries for basket American option pricing, demonstrating zero-shot generalization to unseen strike prices with high accuracy, strong robustness, and practical engineering viability.
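To make the cross-parameter generalization concrete, below is a minimal sketch of one way such a parameter-to-boundary operator could be realized: a DeepONet-style branch-trunk network mapping a contract parameter (the strike) to the stopping boundary as a function of time. The branch-trunk design, layer sizes, and all names are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (hypothetical): a DeepONet-style operator mapping the
# strike K to an optimal stopping boundary t -> b(t; K).
# Architecture, sizes, and names are illustrative assumptions only.
import torch
import torch.nn as nn

class StoppingBoundaryOperator(nn.Module):
    def __init__(self, p: int = 64):
        super().__init__()
        # Branch net: encodes the contract parameter (here just the strike K).
        self.branch = nn.Sequential(
            nn.Linear(1, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, p),
        )
        # Trunk net: encodes the query time t in [0, T].
        self.trunk = nn.Sequential(
            nn.Linear(1, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, strike: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # strike: (batch, 1); t: (batch, m, 1) query times per sample.
        b = self.branch(strike)            # (batch, p)
        tr = self.trunk(t)                 # (batch, m, p)
        # Boundary value = branch-trunk inner product at each query time.
        return torch.einsum("bp,bmp->bm", b, tr) + self.bias  # (batch, m)

# After training on boundaries for a grid of strikes, an unseen strike is
# just a new branch input: a single forward pass, no retraining.
model = StoppingBoundaryOperator()
K_new = torch.tensor([[95.0]])                               # unseen strike
times = torch.linspace(0.0, 1.0, 50).reshape(1, -1, 1)       # time grid
boundary = model(K_new, times)                               # b(t; K_new)
```

The point of the sketch is the zero-shot mechanism: the boundary for a new strike is obtained purely by evaluation, with no weight updates.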
Abstract
We propose a deep neural-operator framework for a general class of probability models. Under global Lipschitz conditions on the operator over the entire Euclidean space, and for a broad class of probabilistic models, we establish a universal approximation theorem with explicit network-size bounds for the proposed architecture. The underlying stochastic processes are required only to satisfy integrability and general tail-probability conditions. We verify these assumptions for both European and American option-pricing problems within the forward-backward SDE (FBSDE) framework, which in turn covers a broad class of operators arising from parabolic PDEs, with or without free boundaries. Finally, we present a numerical example for a basket of American options, demonstrating that the learned model produces optimal stopping boundaries for new strike prices without retraining.
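For reference, the free-boundary FBSDE setting behind American-style claims is commonly written as a reflected FBSDE; a standard textbook formulation (the paper's exact system may differ in details) is:

```latex
% Reflected FBSDE for an American-style claim with payoff g and horizon T.
% K is a nondecreasing process enforcing the obstacle Y_t >= g(X_t);
% the free boundary separates the regions {Y > g(X)} and {Y = g(X)}.
\begin{aligned}
X_t &= x + \int_0^t b(s, X_s)\,\mathrm{d}s
        + \int_0^t \sigma(s, X_s)\,\mathrm{d}W_s, \\
Y_t &= g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s
        - \int_t^T Z_s\,\mathrm{d}W_s + K_T - K_t, \\
Y_t &\ge g(X_t), \qquad
\int_0^T \bigl( Y_t - g(X_t) \bigr)\,\mathrm{d}K_t = 0 .
\end{aligned}
```

In the option-pricing reading, $Y_t$ is the American option value, the obstacle $g$ is the exercise payoff, and the boundary of the contact set $\{Y_t = g(X_t)\}$ is the optimal stopping boundary that the operator learns as a function of the strike.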