🤖 AI Summary
This work investigates how neural networks that learn solution operators for partial differential equations internalize physical symmetries and achieve symmetry-preserving generalization. To this end, the authors propose a novel metric termed *orbit gradient consistency*, which quantifies whether training dynamics couple physically equivalent configurations by analyzing influence functions, measuring gradient overlap over group orbits, and examining the local geometry of the loss landscape. Unlike conventional forward-equivariance tests, this approach directly reveals whether the model converges to a loss basin compatible with the underlying symmetries. Experiments on autoregressive fluid-flow simulators demonstrate that orbit gradient consistency effectively gauges the degree to which models internalize known symmetries, offering an interpretable, quantitative framework for evaluating symmetry generalization.
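One plausible algebraic form of such an orbit-wise gradient overlap, written in our own illustrative notation rather than the paper's: for a state $u$, group element $g$, parameters $\theta$, training loss $\mathcal{L}$, and a metric $M$ on parameter space,

$$
\mathrm{OGC}(u, g) \;=\; \frac{\nabla_\theta \mathcal{L}(u)^{\top} M \,\nabla_\theta \mathcal{L}(g \cdot u)}{\lVert \nabla_\theta \mathcal{L}(u) \rVert_M \,\lVert \nabla_\theta \mathcal{L}(g \cdot u) \rVert_M},
\qquad \lVert v \rVert_M = \sqrt{v^{\top} M v},
$$

where values near 1 would indicate that updates computed at a configuration and at its symmetry-transformed counterpart pull the parameters in the same direction.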
📝 Abstract
We study how neural emulators of partial differential equation solution operators internalize physical symmetries by introducing an influence-based diagnostic that measures how parameter updates propagate between symmetry-related states, defined as the metric-weighted overlap of loss gradients evaluated along group orbits. This quantity probes the local geometry of the learned loss landscape and goes beyond forward-pass equivariance tests by directly assessing whether the learning dynamics couple physically equivalent configurations. Applying our diagnostic to autoregressive fluid flow emulators, we show that orbit-wise gradient coherence provides the mechanism by which learning generalizes over symmetry transformations and indicates when training selects a symmetry-compatible basin. The result is a novel technique for evaluating whether surrogate models have internalized symmetry properties of the known solution operator.
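As a concrete illustration of the kind of quantity involved, the sketch below estimates a gradient-overlap score along a discrete group orbit in JAX. It is a minimal sketch under stated assumptions, not the authors' implementation: the emulator, the one-step MSE loss, the identity parameter metric, and the C4 rotation group acting on 2D snapshots are all placeholders.

```python
# Minimal sketch (not the authors' code): orbit-wise gradient overlap for a
# neural PDE emulator. The emulator, loss, identity metric, and C4 rotation
# group below are illustrative assumptions.
import jax
import jax.numpy as jnp

def loss_fn(params, emulator, state, target):
    """Assumed one-step autoregressive MSE training loss."""
    pred = emulator(params, state)
    return jnp.mean((pred - target) ** 2)

def flat_grad(params, emulator, state, target):
    """Loss gradient w.r.t. parameters, flattened into one vector."""
    grads = jax.grad(loss_fn)(params, emulator, state, target)
    return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])

def orbit_gradient_overlap(params, emulator, state, target, group_ops):
    """Mean cosine overlap between the gradient at a state and the gradients
    at its symmetry-transformed copies (identity metric on parameter space)."""
    g0 = flat_grad(params, emulator, state, target)
    overlaps = []
    for op in group_ops:
        gt = flat_grad(params, emulator, op(state), op(target))
        overlaps.append(jnp.dot(g0, gt) /
                        (jnp.linalg.norm(g0) * jnp.linalg.norm(gt) + 1e-12))
    return jnp.mean(jnp.stack(overlaps))

# Example group: 90-degree rotations of (H, W, C) flow snapshots. For vector
# fields the velocity components would also need to be rotated; that detail
# is omitted here for brevity.
c4_rotations = [lambda x, k=k: jnp.rot90(x, k=k, axes=(0, 1)) for k in range(1, 4)]
```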