Physics-informed learning under mixing: How physical knowledge speeds up learning

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
In physics-informed machine learning, the conventional Sobolev minimax rate degrades when data are dependent, limiting the learning rates achievable in non-i.i.d. settings. Method: We propose a physics-regularized empirical risk minimization framework that incorporates accurate physical priors directly into the loss function, enabling faster learning rates under dependent data. Contribution/Results: We prove that when the embedded physical model exactly matches the underlying governing law, the excess risk converges at the optimal i.i.d. rate, accelerating beyond the standard Sobolev rate, with no loss of sample efficiency. This is the first result to quantify the convergence-rate acceleration induced by physical priors in non-i.i.d. regimes, and it provides a generalization-theoretic bridge between physical modeling fidelity and statistical learning guarantees under data dependence.

📝 Abstract
A major challenge in physics-informed machine learning is to understand how the incorporation of prior domain knowledge affects learning rates when data are dependent. Focusing on empirical risk minimization with physics-informed regularization, we derive complexity-dependent bounds on the excess risk in probability and in expectation. We prove that, when the physical prior information is aligned, the learning rate improves from the (slow) Sobolev minimax rate to the (fast) optimal i.i.d. one without any sample-size deflation due to data dependence.
Problem

Research questions and friction points this paper is trying to address.

Physics-informed machine learning with dependent data
Improving learning rates through physical knowledge
Achieving optimal i.i.d. rates with aligned priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed regularization accelerates learning rates
Aligned physical priors enable optimal i.i.d. convergence
Overcomes slow Sobolev rates under dependent data conditions