SVD-NO: Learning PDE Solution Operators with SVD Integral Kernels

📅 2025-11-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing neural operators impose strong structural assumptions—such as Fourier or graph priors—on the kernel integral operator, limiting expressivity and generalization. To address this, we propose SVD-NO: the first neural operator that explicitly incorporates the singular value decomposition (SVD) of the integral kernel into its architecture. It employs lightweight subnetworks to parameterize the left and right singular functions independently, learns the singular values separately, and enforces orthonormality of the basis functions via a Gram-matrix regularization term. This design enables interpretable, highly expressive, and computationally efficient operator learning in a low-rank subspace. Evaluated on five canonical PDE benchmarks, SVD-NO achieves state-of-the-art performance, excelling in particular on equations whose solutions exhibit strong spatial heterogeneity. The method combines theoretical rigor with practical deployability, offering both a principled foundation and empirical effectiveness.

📝 Abstract
Neural operators have emerged as a promising paradigm for learning solution operators of partial differential equations (PDEs) directly from data. Existing methods, such as those based on Fourier or graph techniques, make strong assumptions about the structure of the kernel integral operator, assumptions which may limit expressivity. We present SVD-NO, a neural operator that explicitly parameterizes the kernel by its singular-value decomposition (SVD) and then carries out the integral directly in the low-rank basis. Two lightweight networks learn the left and right singular functions, a diagonal parameter matrix learns the singular values, and a Gram-matrix regularizer enforces orthonormality. As SVD-NO approximates the full kernel, it obtains a high degree of expressivity. Furthermore, due to its low-rank structure the computational complexity of applying the operator remains reasonable, leading to a practical system. In extensive evaluations on five diverse benchmark equations, SVD-NO achieves a new state of the art. In particular, SVD-NO provides greater performance gains on PDEs whose solutions are highly spatially variable. The code of this work is publicly available at https://github.com/2noamk/SVDNO.git.
Problem

Research questions and friction points this paper is trying to address.

Learning PDE solution operators with neural networks using SVD kernels
Overcoming expressivity limitations of Fourier and graph-based neural operators
Achieving high accuracy on spatially variable PDE solutions efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameterizes kernel via SVD decomposition
Learns singular functions with lightweight networks
Uses low-rank structure for computational efficiency
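The ideas above can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the left and right singular functions (which SVD-NO learns with lightweight subnetworks) are stood in for by fixed sine/cosine features on a 1-D grid, and the integral is approximated by a quadrature sum. The function names (`apply_operator`, `gram_penalty`) and all numeric choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n, rank = 128, 4                      # grid points, kernel rank
x = np.linspace(0.0, 1.0, n)          # 1-D domain [0, 1]
w = np.full(n, 1.0 / n)               # simple quadrature weights

# Stand-ins for the two lightweight subnetworks: left/right singular
# functions evaluated on the grid, each of shape (n, rank).
U = np.stack([np.sin(np.pi * (r + 1) * x) for r in range(rank)], axis=1)
V = np.stack([np.cos(np.pi * (r + 1) * x) for r in range(rank)], axis=1)
sigma = np.array([1.0, 0.5, 0.25, 0.125])   # learned singular values

def apply_operator(f):
    """(Kf)(x) = sum_r sigma_r * u_r(x) * <v_r, f>, where the inner
    product integral is approximated by a weighted sum on the grid."""
    coeffs = V.T @ (w * f)            # (rank,) inner products <v_r, f>
    return U @ (sigma * coeffs)       # (n,) output function on the grid

def gram_penalty(B):
    """Orthonormality regularizer: || B^T diag(w) B - I ||_F^2."""
    G = B.T @ (w[:, None] * B)
    return np.sum((G - np.eye(B.shape[1])) ** 2)

f = np.exp(-x)                        # example input function
g = apply_operator(f)
print(g.shape, gram_penalty(U))
```

Note the cost of `apply_operator` is O(n * rank) rather than the O(n^2) of a dense kernel matrix, which is the source of the efficiency claim; the Gram penalty would be added to the training loss to keep the learned basis functions near-orthonormal.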
Noam Koren
Department of Computer Science, Technion – Israel Institute of Technology, Haifa, Israel
Ralf J. J. Mackenbach
Swiss Plasma Center, EPFL, Switzerland
R. V. Sloun
Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
Kira Radinsky
Technion
Daniel Freedman
Department of Applied Mathematics, Tel Aviv University, Tel Aviv, Israel