🤖 AI Summary
This paper addresses the universal approximation of continuous (possibly nonlinear) operators on Banach spaces. Methodologically, it introduces a learning framework based on orthogonal polynomial projections, and it is presented as the first integration of the Leray–Schauder mapping into operator approximation theorems, combining Banach-space operator analysis with spectral approximation techniques in $L^p$ spaces. Specifically, in $L^p$ (notably $L^2$), it establishes a two-stage operator-learning paradigm: a learnable projection followed by a finite-dimensional mapping. The theoretical contributions are: (1) a proof of universal approximation for this framework on arbitrary Banach spaces; (2) explicit sufficient conditions under which high-precision operator approximation holds in $L^2$; and (3) a rigorous, unified mathematical foundation for operator neural networks.
📝 Abstract
We obtain a new universal approximation theorem for continuous (possibly nonlinear) operators on arbitrary Banach spaces using the Leray–Schauder mapping. Moreover, we introduce and study a method for operator learning in Banach spaces $L^p$ of functions of multiple variables, based on orthogonal projections onto polynomial bases. We derive a universal approximation result for operators in which we learn a linear projection and a finite-dimensional mapping, under some additional assumptions. For the case $p=2$, we give sufficient conditions for the approximation results to hold. This article provides the theoretical framework for a deep learning methodology in operator learning.
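To make the two-stage paradigm concrete, here is a minimal numerical sketch in $L^2([-1,1])$ using the Legendre basis. Everything in it is an illustrative assumption, not the paper's method: the learnable projection is replaced by the fixed $L^2$-orthogonal projection onto the first $n$ Legendre polynomials, and the finite-dimensional mapping is replaced by a known linear map on coefficients (differentiation of a Legendre series), standing in for a trained network.

```python
import numpy as np
from numpy.polynomial import legendre

# Stage 1: project an input function u onto the first n Legendre
# polynomials (a fixed orthogonal projection; in the paper this
# projection is learned).
# Stage 2: apply a finite-dimensional map to the coefficient vector
# (here: differentiation of the Legendre series, a toy stand-in for
# a learned finite-dimensional mapping).

n = 8                                  # number of basis functions (assumption)
x = np.linspace(-1.0, 1.0, 2001)       # quadrature grid on [-1, 1]
dx = x[1] - x[0]
w = np.full_like(x, dx)                # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

def project(u_vals):
    """Coefficients c_k = <u, P_k> / ||P_k||^2, with ||P_k||^2 = 2/(2k+1)."""
    coeffs = np.empty(n)
    for k in range(n):
        Pk = legendre.Legendre.basis(k)(x)
        coeffs[k] = np.sum(u_vals * Pk * w) / (2.0 / (2 * k + 1))
    return coeffs

def finite_dim_map(c):
    """Toy finite-dimensional mapping on coefficients: d/dx of the series."""
    return legendre.legder(c)

u = np.sin(np.pi * x)                  # example input function
c = project(u)                         # stage 1: projection to R^n
dc = finite_dim_map(c)                 # stage 2: finite-dimensional map
du = legendre.legval(x, dc)            # reconstruct the output function

# du approximates u'(x) = pi * cos(pi * x); the residual reflects the
# truncation to n basis functions.
err = np.max(np.abs(du - np.pi * np.cos(np.pi * x)))
```

The point of the sketch is only the factorization: an infinite-dimensional operator is approximated by a projection onto finitely many basis coefficients composed with a map between finite-dimensional coefficient spaces, which is the structure the paper's universal approximation results justify.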