Interpretability and Generalization Bounds for Learning Spatial Physics

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the reliability of physical equation modeling in scientific machine learning, using the one-dimensional Poisson equation as a canonical benchmark to systematically quantify how discretization fidelity and functional space constraints affect model generalization. Method: We integrate numerical analysis theory into ML generalization analysis for the first time, combining training dynamics modeling, rigorous derivation of generalization bounds, and analytical extraction of Green’s function-based weights. We further propose a Green’s function–driven interpretability paradigm for black-box models and design a physics-aware cross-validation benchmark. Contributions/Results: Empirical evaluation spans differential equation discovery methods, linear models, DeepONets, and neural operators. We rigorously establish finite-data generalization bounds and convergence rates, proving that mainstream physics-informed ML models cannot guarantee convergence to the true governing equation. Our work establishes the Poisson equation as a universal evaluation benchmark and provides both theoretical foundations and practical tools for trustworthy scientific AI.

📝 Abstract
While there are many applications of ML to scientific problems that look promising, visuals can be deceiving. For scientific applications, actual quantitative accuracy is crucial. This work applies the rigor of numerical analysis for differential equations to machine learning by specifically quantifying the accuracy of applying different ML techniques to the elementary 1D Poisson differential equation. Beyond the quantity and discretization of data, we identify that the function space of the data is critical to the generalization of the model. We prove generalization bounds and convergence rates under finite data discretizations and restricted training data subspaces by analyzing the training dynamics and deriving optimal parameters for both a white-box differential equation discovery method and a black-box linear model. The analytically derived generalization bounds are replicated empirically. Similar lack of generalization is empirically demonstrated for deep linear models, shallow neural networks, and physics-specific DeepONets and Neural Operators. We theoretically and empirically demonstrate that generalization to the true physical equation is not guaranteed in each explored case. Surprisingly, we find that different classes of models can exhibit opposing generalization behaviors. Based on our theoretical analysis, we also demonstrate a new mechanistic interpretability lens on scientific models whereby Green's function representations can be extracted from the weights of black-box models. Our results inform a new cross-validation technique for measuring generalization in physical systems. We propose applying it to the Poisson equation as an evaluation benchmark of future methods.
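The abstract's claim that Green's function representations can be read out of a black-box model's weights can be illustrated with a minimal sketch: for -u'' = f on (0, 1) with u(0) = u(1) = 0, a linear model trained on (forcing, solution) pairs recovers a discretization of the analytic Green's function G(x, s) = min(x, s)(1 - max(x, s)). This is not the paper's code; the grid size, sample count, and variable names are illustrative assumptions.

```python
import numpy as np

# Sketch setup (assumed): -u'' = f on (0, 1), u(0) = u(1) = 0.
n = 49                         # interior grid points (illustrative)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Second-order finite-difference Laplacian.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Training pairs: random forcings f and their solutions u = A^{-1} f.
rng = np.random.default_rng(0)
F = rng.standard_normal((200, n))      # one forcing sample per row
U = np.linalg.solve(A, F.T).T          # matching solutions

# Black-box linear model u ≈ f @ W, fit by ordinary least squares.
W, *_ = np.linalg.lstsq(F, U, rcond=None)

# Analytic Green's function of the continuous problem; the learned
# weights satisfy W ≈ h * G on the grid.
G = np.minimum(x[:, None], x[None, :]) * (1 - np.maximum(x[:, None], x[None, :]))
print(np.max(np.abs(W / h - G)))       # residual near machine precision
```

When the forcings span the full discrete function space, the least-squares weights coincide with the inverse discrete Laplacian, whose entries are exactly h·G(x_i, x_j) for this stencil, which is what makes the interpretability lens exact in the linear case.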
Problem

Research questions and friction points this paper is trying to address.

Quantify accuracy of ML techniques for 1D Poisson equation
Identify function space impact on model generalization
Demonstrate lack of generalization in various ML models
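The role of the training function space can be sketched in the same linear setting: if the forcings are drawn only from a low-frequency subspace, the fitted model is exact inside that subspace yet fails entirely outside it. The mode count and test frequencies below are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

# Same assumed setup: -u'' = f on (0, 1), u(0) = u(1) = 0.
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Restricted function space: forcings from the 5 lowest sine modes only.
modes = np.array([np.sin(k * np.pi * x) for k in range(1, 6)])
rng = np.random.default_rng(1)
F_train = rng.standard_normal((200, 5)) @ modes
U_train = np.linalg.solve(A, F_train.T).T
W, *_ = np.linalg.lstsq(F_train, U_train, rcond=None)

def rel_err(f):
    """Relative error of the learned map against the true solver."""
    u_true = np.linalg.solve(A, f)
    return np.linalg.norm(f @ W - u_true) / np.linalg.norm(u_true)

in_err = rel_err(np.sin(3 * np.pi * x))    # inside the training subspace
out_err = rel_err(np.sin(11 * np.pi * x))  # outside it
print(in_err, out_err)
```

The minimum-norm least-squares solution only captures the operator on the span of the training forcings; a high-frequency forcing orthogonal to that span is mapped to essentially zero, giving O(1) relative error even though the training fit is perfect.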
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantifies ML accuracy for 1D Poisson equation
Proves generalization bounds under finite data
Extracts Green's function from black-box models
A. Queiruga (Google)
Theo Gutman-Solo (Google)
Shuai Jiang (Google)