Benign overfitting in Fixed Dimension via Physics-Informed Learning with Smooth Inductive Bias

📅 2024-06-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the generalization mechanisms of linear inverse problems, such as elliptic PDE-constrained inverse problems, in fixed-dimensional settings under physical constraints. To characterize benign overfitting, the authors adopt a smooth inductive bias framework in which Sobolev norms serve as implicit regularizers. Combining kernel ridge(less) regression with asymptotic learning theory, they give the first rigorous proof that PDE operators stabilize the variance and can induce benign overfitting even in fixed dimension. The analysis further shows that, provided the Sobolev prior is sufficiently smooth, Sobolev norms of arbitrary order all yield the optimal convergence rate, a result that unifies Bayesian estimators and minimum-norm interpolators. This constitutes the first fixed-dimensional theoretical guarantee for the generalization capability of physics-informed learning models.
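As a rough formalization of the setting (our notation, not taken verbatim from the paper): given noisy measurements $y_i = (\mathcal{A} f^\ast)(x_i) + \varepsilon_i$ of a PDE forward operator $\mathcal{A}$, the two estimators compared here are the Sobolev-regularized least squares estimator and its ridgeless, minimum-norm interpolation limit:

$$\hat f_\lambda = \arg\min_{f \in H^s} \; \frac{1}{n}\sum_{i=1}^n \big(y_i - (\mathcal{A} f)(x_i)\big)^2 + \lambda \,\|f\|_{H^s}^2, \qquad \hat f_0 = \arg\min_{(\mathcal{A} f)(x_i) = y_i,\; i=1,\dots,n} \|f\|_{H^s}.$$

The claim summarized above is that, for an appropriate choice of $\lambda$ and any sufficiently large smoothness order $s$, both estimators attain the optimal convergence rate, with the operator $\mathcal{A}$ damping the variance term that usually drives harmful overfitting.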

📝 Abstract
Recent advances in machine learning have inspired a surge of research into reconstructing specific quantities of interest from measurements that comply with certain physical laws. These efforts focus on inverse problems that are governed by partial differential equations (PDEs). In this work, we develop an asymptotic Sobolev norm learning curve for kernel ridge(less) regression when addressing (elliptic) linear inverse problems. Our results show that the PDE operators in the inverse problem can stabilize the variance and even lead to benign overfitting for fixed-dimensional problems, exhibiting behaviors different from those of regression problems. In addition, our investigation demonstrates the impact of various inductive biases introduced by minimizing different Sobolev norms as a form of implicit regularization. For the regularized least squares estimator, we find that all considered inductive biases achieve the optimal convergence rate, provided the regularization parameter is chosen appropriately. The convergence rate is in fact independent of the choice of (smooth enough) inductive bias for both ridge and ridgeless regression. Surprisingly, our smoothness requirement recovers the condition found in the Bayesian setting and extends the conclusion to minimum-norm interpolation estimators.
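To make the kernel ridge(less) regression setup concrete, below is a minimal illustrative sketch (not the authors' code, and it omits the PDE forward operator): a Matérn kernel, whose RKHS is norm-equivalent to a Sobolev space, is fit both with an explicit ridge penalty and in the ridgeless minimum-norm interpolation limit. All function names and parameter values are our own choices.

```python
# Minimal sketch (ours, not the authors' code): kernel ridge vs. ridgeless
# (minimum-norm) regression with a Matern-3/2 kernel, whose RKHS is
# norm-equivalent to a Sobolev space -- the "smooth inductive bias" above.
import numpy as np
from scipy.spatial.distance import cdist

def matern32_kernel(X, Y, length_scale=0.3):
    """Matern-3/2 kernel matrix between point sets X and Y."""
    d = cdist(X, Y) / length_scale
    return (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

def kernel_ridge_fit(X, y, lam):
    """Solve (K + n*lam*I) alpha = y; lam = 0 gives the minimum-RKHS-norm interpolant."""
    n = len(y)
    K = matern32_kernel(X, X)
    if lam > 0:
        return np.linalg.solve(K + n * lam * np.eye(n), y)
    # Ridgeless limit: the pseudo-inverse selects the minimum-norm interpolating solution.
    return np.linalg.pinv(K) @ y

def predict(X_train, alpha, X_test):
    return matern32_kernel(X_test, X_train) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    X = rng.uniform(0.0, 1.0, size=(n, 1))
    f_star = lambda x: np.sin(2 * np.pi * x[:, 0])   # hypothetical target function
    y = f_star(X) + 0.1 * rng.standard_normal(n)     # noisy observations

    X_test = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
    for lam in [1e-2, 0.0]:                          # ridge vs. ridgeless
        alpha = kernel_ridge_fit(X, y, lam)
        mse = np.mean((predict(X, alpha, X_test) - f_star(X_test)) ** 2)
        print(f"lambda={lam:g}  test MSE={mse:.4f}")
```

The pseudo-inverse in the ridgeless branch picks the minimum-RKHS-norm function among all interpolants of the data, which is the minimum-norm interpolation estimator the abstract refers to; the paper's inverse-problem setting additionally composes the kernel with a PDE operator, which this sketch does not model.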
Problem

Research questions and friction points this paper is trying to address.

Study benign overfitting in fixed-dimensional PDE-based inverse problems
Analyze Sobolev norm learning curves for kernel regression methods
Explore impact of smooth inductive biases on convergence rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed learning with smooth inductive bias
Asymptotic Sobolev norm learning curve
Optimal convergence via implicit regularization
Authors
Honam Wong, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
Wendao Wu, School of Mathematical Sciences, Peking University
Fanghui Liu, Assistant Professor (Foundations of Modern ML), University of Warwick
Yiping Lu, Industrial Engineering & Management Sciences, Northwestern University