Rates and architectures for learning geometrically non-trivial operators

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing operator learning theory covers only geometrically simple elliptic operators and does not characterize non-elliptic problems, such as wave propagation, convection, and fluid dynamics, in which singularities propagate along characteristic manifolds. This work establishes a rigorous learning theory for double fibration transforms (e.g., generalized Radon transforms and geodesic X-ray transforms), proving superalgebraic convergence rates. The authors propose a geometry-aware cross-attention architecture, parameterized via level sets, for learning singularity-propagating operators without a curse of dimensionality. The method integrates the geometric analysis of double fibrations with operator learning theory, yielding stable generalization from very few samples and superalgebraically decaying approximation error. These results relax the conventional ellipticity assumption, strengthening both the theoretical foundations and the practical reach of scientific machine learning for complex physical modeling.
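The summary leaves "superalgebraic" informal; on the standard reading (matching the abstract's gloss below), it means the learning error decays faster than any fixed power of the number of training samples n. A minimal formal statement, in our notation rather than the paper's:

```latex
% Superalgebraic decay of the learning error in the number of
% training samples n (notation ours, not the paper's): for every k
% there exists a constant C_k, independent of n, such that
\[
  \mathrm{err}(n) \le C_k \, n^{-k} \qquad \text{for every } k \in \mathbb{N}.
\]
% Contrast with curse-of-dimensionality bounds of the form
% err(n) = O(n^{-\alpha/d}), which degrade as the dimension d grows.
```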

📝 Abstract
Deep learning methods have proven capable of recovering operators between high-dimensional spaces, such as solution maps of PDEs and similar objects in mathematical physics, from very few training samples. This phenomenon of data efficiency has been proven for certain classes of elliptic operators with simple geometry, i.e., operators that do not change the domain of the function or propagate singularities. However, scientific machine learning is commonly used for problems that do involve the propagation of singularities in a priori unknown ways, such as waves, advection, and fluid dynamics. In light of this, we expand the learning theory to include double fibration transforms: geometric integral operators that include generalized Radon and geodesic X-ray transforms. We prove that this class of operators does not suffer from the curse of dimensionality: the error decays superalgebraically, that is, faster than any fixed power of the reciprocal of the number of training samples. Furthermore, we investigate architectures that explicitly encode the geometry of these transforms, demonstrating that an architecture reminiscent of cross-attention based on level-set methods yields a parameterization that is universal, stable, and learns double fibration transforms from very few training examples. Our results contribute to a rapidly growing line of theoretical work on learning operators for scientific machine learning.
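The architecture is only described at the level of the abstract; as a hedged illustration of how level-set-based cross-attention can represent a Radon-type transform, the NumPy sketch below treats each line {x · ω(θ) = s} as a level set of φ_θ(x) = x · ω(θ), scores every grid point by its distance to that level set, and lets a softmax over keys average the input function along the line. All names (levelset_cross_attention, eps, and so on) are hypothetical, and the fixed Gaussian score is a stand-in for whatever learned queries and keys the actual architecture uses.

```python
import numpy as np

def levelset_cross_attention(f_vals, xs, thetas, offsets, eps=1e-2):
    """Toy level-set cross-attention for a Radon-type transform.

    Each query is a line {x . w(theta) = s}, i.e. a level set of
    phi_theta(x) = x . w(theta); keys are the grid points xs. The
    softmax over keys concentrates near the level set, so each output
    approximates the average of f along the corresponding line.
    """
    w = np.stack([np.cos(thetas), np.sin(thetas)], axis=-1)  # (T, 2) line normals
    phi = xs @ w.T                                           # (N, T) level-set values
    scores = -(phi[None, :, :] - offsets[:, None, None]) ** 2 / eps  # (S, N, T)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))        # stable softmax
    attn /= attn.sum(axis=1, keepdims=True)                          # normalize over keys
    return np.einsum("snt,n->st", attn, f_vals)                      # (S, T) "sinogram"

# Toy usage: indicator of a disk; rows index the offset s, columns the angle theta.
n = 64
g = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(g, g)
xs = np.stack([X.ravel(), Y.ravel()], axis=-1)                  # (N, 2) grid points
f_vals = (X.ravel() ** 2 + Y.ravel() ** 2 < 0.25).astype(float)
thetas = np.linspace(0.0, np.pi, 8, endpoint=False)
offsets = np.linspace(-1.0, 1.0, 16)
sino = levelset_cross_attention(f_vals, xs, thetas, offsets)    # shape (16, 8)
```

Because the attention weights form a smoothed delta along each level set, this averages rather than integrates; a trained model would presumably replace the fixed kernel and normalization with learned components.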
Problem

Research questions and friction points this paper is trying to address.

Expanding learning theory to include geometric integral operators like double fibration transforms
Proving these operators avoid the curse of dimensionality with superalgebraic error decay
Developing architectures that encode geometry for universal, stable learning from few samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expanding learning theory to double fibration transforms
Proving superalgebraic error decay without dimensionality curse
Designing geometry-aware architectures that combine cross-attention with level-set methods
T. Mitchell Roddenberry
Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
Leo Tzou
School of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
Ivan Dokmanić
Associate Professor, Department of Mathematics and Computer Science, University of Basel
Maarten V. de Hoop
Department of Computational Applied Mathematics and Operations Research, Rice University, Houston, TX, USA
Richard G. Baraniuk
Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA