Simultaneously Solving FBSDEs with Neural Operators of Logarithmic Depth, Constant Width, and Sub-Linear Rank

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Existing FBSDE solvers rely on sequential, equation-by-equation resolution, rendering them inefficient for large-scale families of forward-backward stochastic differential equations (FBSDEs). While neural operators (NOs) offer theoretical generalization potential, their generic universal approximation guarantees lead to prohibitively large architectures that are computationally infeasible. Method: We propose a convolutional neural operator framework integrating Green's function encoding, Sobolev-space analysis, stochastic terminal-time modeling, and sub-linear-rank low-dimensional approximation. Contribution/Results: We establish the first constructive existence proof: for structured FBSDE families, there exists a compact neural operator with depth $O(\log(1/\varepsilon))$, width $O(1)$, and rank $O(\varepsilon^{-r})$ with $r<1$ that uniformly approximates the solution operator to any prescribed accuracy $\varepsilon > 0$. Crucially, we show that channel-wise lifting exponentially suppresses rank growth. Our framework achieves uniform $\varepsilon$-approximation on appropriate compact sets, yielding the first scalable, complexity-guaranteed theory and architecture for jointly solving FBSDEs and elliptic PDEs.
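
As a rough illustration of the claimed scale (a minimal sketch under my own assumptions; the layer form, weights, and names below are illustrative, not the paper's construction), a rank-$r$ integral-kernel layer of constant width can be stacked to a depth of order $\log(1/\varepsilon)$:

```python
import numpy as np

def rank_r_layer(v, phi, psi, w, b, dx):
    """Schematic NO layer: pointwise affine term plus a rank-r integral
    operator (K v)(x) = sum_j phi_j(x) <psi_j, v>. Names are illustrative."""
    coeffs = psi @ v * dx                    # r inner products <psi_j, v>
    Kv = phi.T @ coeffs                      # recombine: rank-r kernel applied to v
    return np.maximum(w * v + Kv + b, 0.0)   # ReLU nonlinearity

eps = 1e-3
depth = int(np.ceil(np.log(1.0 / eps)))      # O(log(1/eps)) layers
n, r = 128, 8                                # grid points; sub-linear rank r << n
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
rng = np.random.default_rng(0)

v = np.sin(np.pi * x)                        # input function sampled on the grid
for _ in range(depth):                       # constant width: one channel here
    phi = rng.standard_normal((r, n)) / r    # placeholder (untrained) weights
    psi = rng.standard_normal((r, n))
    v = rank_r_layer(v, phi, psi, w=1.0, b=0.0, dx=dx)
```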

📝 Abstract
Forward-backward stochastic differential equations (FBSDEs) are central in optimal control, game theory, economics, and mathematical finance. Unfortunately, the available FBSDE solvers operate on *individual* FBSDEs, meaning that they cannot provide a computationally feasible strategy for solving large families of FBSDEs, as these solvers must be re-run several times. *Neural operators* (NOs) offer an alternative approach for *simultaneously solving* large families of FBSDEs by directly approximating the solution operator mapping *inputs* (terminal conditions and dynamics of the backward process) to *outputs* (solutions of the associated FBSDE). Though universal approximation theorems (UATs) guarantee the existence of such NOs, these NOs are unrealistically large. We confirm that "small" NOs can uniformly approximate the solution operator to structured families of FBSDEs with random terminal time, uniformly on suitable compact sets determined by Sobolev norms, to any prescribed error $\varepsilon>0$ using a depth of $\mathcal{O}(\log(1/\varepsilon))$, a width of $\mathcal{O}(1)$, and a sub-linear rank, i.e., $\mathcal{O}(1/\varepsilon^r)$ for some $r<1$. This result is rooted in our second main contribution, which shows that convolutional NOs of similar depth, width, and rank can approximate the solution operator to a broad class of elliptic PDEs. A key insight here is that the convolutional layers of our NO can efficiently encode the Green's function associated with the elliptic PDEs linked to our FBSDEs. A byproduct of our analysis is the first theoretical justification for the benefit of lifting channels in NOs: they exponentially decelerate the growth rate of the NO's rank.
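
The Green's-function insight lends itself to a numerical illustration. The toy 1-D sketch below (my own example, not code from the paper) solves the semilinear problem $-u'' = f - u^3$ on $[0,1]$ with zero boundary values by Picard iteration against the explicit Green's function $G(x,y)=\min(x,y)(1-\max(x,y))$. Each iteration is one step of "integrate against $G$, then apply a pointwise nonlinearity", which is exactly the structure of a convolutional NO layer, and the geometric convergence of the iteration is what caps the depth at $\mathcal{O}(\log(1/\varepsilon))$:

```python
import numpy as np

# Grid and the Green's function of -d^2/dx^2 with u(0) = u(1) = 0:
# G(x, y) = min(x, y) * (1 - max(x, y)).
n = 201
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.minimum(X, Y) * (1.0 - np.maximum(X, Y))

f = np.sin(np.pi * x)   # illustrative source term
u = np.zeros(n)         # initial guess

# Picard iteration for -u'' = f - u^3:
#   u_{k+1}(x) = integral of G(x, y) * (f(y) - u_k(y)^3) dy.
# One step = integral ("convolutional") layer + pointwise nonlinearity.
for k in range(50):
    u_next = G @ ((f - u**3) * dx)
    if np.max(np.abs(u_next - u)) < 1e-12:
        break
    u = u_next

# Sanity check: dropping the u^3 term, a single step returns
# sin(pi*x) / pi**2, the exact solution of -u'' = sin(pi*x).
```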
Problem

Research questions and friction points this paper is trying to address.

Simultaneously solving large families of FBSDEs with neural operators
Approximating solution operators for semilinear elliptic PDEs efficiently
Reducing neural operator size while maintaining approximation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small neural operators solve families of FBSDEs
Convolutional NOs approximate solution operators
Fixed-point iteration mimics NO layers (see the Green's-function sketch after the Abstract, and the channel-lifting sketch below)
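
The lifting-channels claim from the abstract also admits a small sanity check (my own construction, illustrating the mechanism rather than the paper's proof): copying the input into $c$ channels lets $c$ rank-one per-channel kernels reproduce an operator whose single-channel kernel has rank $c$.

```python
import numpy as np

n, c = 128, 6
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# A rank-c scalar kernel K(x, y) = sum_j phi_j(x) * psi_j(y).
phi = np.array([np.sin((j + 1) * np.pi * x) for j in range(c)])   # (c, n)
psi = np.array([np.cos((j + 1) * np.pi * x) for j in range(c)])   # (c, n)
v = np.exp(-x)                        # input function

# Single-channel layer: must carry the full rank-c kernel.
K = phi.T @ psi                       # (n, n) matrix of rank c
out_scalar = K @ v * dx

# Lifted layer: copy v into c channels, apply a rank-one kernel per
# channel, then project by summing channels. Same operator, rank 1 each.
V = np.tile(v, (c, 1))                # lifting: (c, n)
per_channel = np.array(
    [np.outer(phi[j], psi[j]) @ V[j] * dx for j in range(c)]
)
out_lifted = per_channel.sum(axis=0)

assert np.allclose(out_scalar, out_lifted)   # identical outputs
```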