🤖 AI Summary
In physics-informed machine learning, data dependence limits the achievable learning rates: under non-i.i.d. sampling, conventional analyses are constrained to the slow Sobolev minimax rates.
Method: We propose a physics-regularized empirical risk minimization framework that explicitly incorporates accurate physical priors into the loss function, enabling improved learning rates under dependent data.
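A schematic form of such an objective, under standard physics-informed conventions, augments the data-fitting term with a penalty on the residual of the embedded governing law; the squared loss, residual operator $\mathcal{N}$, domain $\Omega$, and penalty weight $\lambda$ below are illustrative placeholders rather than the paper's exact formulation.

```latex
% Sketch of a physics-regularized ERM objective (illustrative notation, not the paper's
% exact estimator): an empirical data-fit term on (possibly dependent) samples plus a
% penalty on the residual of an assumed governing law N[f] = 0 over the domain Omega.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{equation*}
  \widehat{f}
  \in
  \operatorname*{arg\,min}_{f \in \mathcal{F}}\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \bigl(f(X_i) - Y_i\bigr)^{2}}_{\text{empirical risk on dependent data}}
  \;+\;
  \lambda\,
  \underbrace{\bigl\lVert \mathcal{N}[f] \bigr\rVert_{L^{2}(\Omega)}^{2}}_{\text{residual of the embedded physical law}}
\end{equation*}
\end{document}
```

If the target function satisfies the embedded law exactly, the penalty vanishes at it; this accurate-prior regime is the one addressed by the results below.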
Contribution/Results: We theoretically establish that when the embedded physical model exactly matches the underlying governing law, the excess risk converges at the optimal i.i.d. rate, accelerating beyond the standard Sobolev rates without any loss of sample efficiency. This is the first result to quantify the convergence-rate acceleration induced by physical priors in non-i.i.d. regimes. Our analysis provides a novel generalization-theoretic paradigm for physics-informed learning, bridging physical modeling fidelity with statistical learning guarantees under data dependence.
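Writing $\mathcal{E}(\widehat{f}\,)$ for the excess risk, the claimed acceleration can be summarized schematically; the exponents $\alpha_{\mathrm{Sob}} < \alpha_{\mathrm{iid}}$ are generic placeholders rather than the paper's explicit values, and $n$ is the full (dependent) sample size, reflecting the claim that no effective-sample-size discount is paid.

```latex
% Schematic rate comparison (placeholder exponents; the paper's explicit rates are not
% reproduced here). The accelerated bound uses the full sample size n, i.e. no deflation
% for data dependence.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align*}
  \text{without an exact physical prior:}\quad
  & \mathcal{E}\bigl(\widehat{f}\,\bigr) = O_{\mathbb{P}}\!\bigl(n^{-\alpha_{\mathrm{Sob}}}\bigr)
    \quad \text{(slow Sobolev minimax rate)}, \\
  \text{with an exactly matched prior:}\quad
  & \mathcal{E}\bigl(\widehat{f}\,\bigr) = O_{\mathbb{P}}\!\bigl(n^{-\alpha_{\mathrm{iid}}}\bigr)
    \quad \text{(fast optimal i.i.d. rate)},
    \qquad \alpha_{\mathrm{iid}} > \alpha_{\mathrm{Sob}} .
\end{align*}
\end{document}
```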
📝 Abstract
A major challenge in physics-informed machine learning is to understand how the incorporation of prior domain knowledge affects learning rates when data are dependent. Focusing on empirical risk minimization with physics-informed regularization, we derive complexity-dependent bounds on the excess risk in probability and in expectation. We prove that, when the physical prior information is aligned, the learning rate improves from the (slow) Sobolev minimax rate to the (fast) optimal i.i.d. one without any sample-size deflation due to data dependence.