🤖 AI Summary
In physics-informed neural networks (PINNs) applied to nonequilibrium fluctuating systems, conventional penalty terms are designed heuristically and fail to faithfully capture the true fluctuation structure. Method: This work proposes a thermodynamically consistent neural network framework by incorporating the large deviation principle (LDP) into the design of the PINN penalty term for the first time. Using the system-level large deviation rate function as a physically grounded basis, we construct a loss function that explicitly suppresses improbable fluctuation trajectories, thereby enforcing physical consistency when regularizing nonequilibrium fluctuations. Contribution/Results: We derive a posteriori error bounds grounded in the rate function. Experiments across multiple nonequilibrium fluctuating systems demonstrate that our method significantly outperforms traditional residual-weighting strategies, yielding improved adherence to physical laws, enhanced generalization, and greater robustness.
📝 Abstract
Physics-Informed Neural Networks (PINNs) are a class of deep learning models that approximate solutions of PDEs by training neural networks to minimize the residual of the equation. Focusing on non-equilibrium fluctuating systems, we propose a physically informed choice of penalization that is consistent with the underlying fluctuation structure, as characterized by a large deviations principle. This yields a novel formulation of PINNs in which the penalty term is chosen to penalize improbable deviations rather than being selected heuristically. The resulting thermodynamically consistent extension of PINNs, termed THINNs, is then analyzed by establishing analytical a posteriori estimates and providing empirical comparisons to established penalization strategies.
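To make the contrast concrete, the following is a minimal sketch (not the paper's implementation) of the two penalization choices on a 1D spatial grid. It assumes, purely for illustration, an SPDE with additive noise whose small-noise rate function is quadratic in the residual but measured in an $H^{-1}$-type norm, discretized here via an inverse Dirichlet Laplacian; the function names and discretization are hypothetical.

```python
import numpy as np

def mse_penalty(residual, dx):
    # Conventional PINN penalty: plain (heuristically weighted) L2 norm
    # of the PDE residual on the grid.
    return 0.5 * dx * np.sum(residual ** 2)

def rate_function_penalty(residual, dx):
    # LDP-inspired penalty (illustrative discretization): for additive
    # spatially correlated noise, the large-deviation rate function of a
    # residual r is 1/2 ||r||^2 in the Cameron-Martin norm of the noise,
    # approximated here by an H^{-1}-type norm: 1/2 <r, (-Laplacian)^{-1} r>.
    n = residual.size
    # Discrete 1D Laplacian with Dirichlet boundary conditions.
    L = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    w = np.linalg.solve(-L, residual)       # w = (-Laplacian)^{-1} r
    return 0.5 * dx * np.dot(residual, w)   # 1/2 ||r||_{H^{-1}}^2 (discrete)
```

Both penalties vanish only for a vanishing residual, but the rate-function penalty weights residual modes according to how improbable the corresponding fluctuation is under the noise, rather than uniformly.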