AI Summary
In scientific machine learning, models often neglect physical symmetries because equivariant training is perceived as difficult or equivariant implementations as complex to build, yielding predictions that violate fundamental physical laws. To address this, we propose Test-time Group Averaging (TGA), a training-free, inference-stage correction that requires no architectural modification or retraining; its only cost is a number of extra forward passes proportional to the size of the symmetry group, which is often small. During inference, TGA applies every symmetry transformation in the group to the input, performs a forward pass for each, and averages the (back-transformed) outputs, thereby exactly enforcing the target symmetry in the predictions. Under common regularity conditions, the group-averaged model provably matches or improves the accuracy of the original. Experiments across multiple PDE modeling paradigms show up to a 37% reduction in VRMSE, consistent decreases in evaluation loss, improved adherence to physical constraints, and markedly better visual quality for continuous dynamical trajectories. The core contribution is a training-free, general-purpose technique for exact symmetry enforcement on any pre-trained model.
Abstract
Many machine learning tasks in the natural sciences are precisely equivariant to particular symmetries. Nonetheless, equivariant methods are often not employed, perhaps because training is perceived to be challenging, or the symmetry is expected to be learned, or equivariant implementations are seen as hard to build. Group averaging is an available technique for these situations. It happens at test time; it can make any trained model precisely equivariant at an (often small) cost proportional to the size of the group; it places no requirements on model structure or training. It is known that, under mild conditions, the group-averaged model will have provably better prediction accuracy than the original model. Here we show that inexpensive group averaging can improve accuracy in practice. We take well-established benchmark machine learning models of differential equations in which certain symmetries ought to be obeyed. At evaluation time, we average the models over a small group of symmetries. Our experiments show that this procedure always decreases the average evaluation loss, with improvements of up to 37% in terms of the VRMSE. The averaging produces visually better predictions for continuous dynamics. This short paper shows that, under certain common circumstances, there are no disadvantages to imposing exact symmetries; the ML4PS community should consider group averaging as a cheap and simple way to improve model accuracy.
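As a concrete illustration, the group-averaging procedure described above can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: it uses a hypothetical toy 1D "model" and the two-element left-right flip group, and the names `model`, `flip`, and `group_averaged` are illustrative. For an equivariant target one averages the back-transformed outputs, i.e. the mean of g⁻¹·model(g·x) over all group elements g.

```python
import numpy as np

# Hypothetical toy model: a pointwise rescaling that is NOT flip-equivariant.
def model(x):
    return x * np.linspace(0.0, 1.0, x.shape[-1])  # breaks left-right symmetry

# The flip group on 1D fields: {identity, reverse}. Each flip is its own inverse.
def flip(x):
    return x[..., ::-1]

GROUP = [lambda x: x, flip]      # group elements g
GROUP_INV = [lambda x: x, flip]  # their inverses g^{-1} (flips are involutions)

def group_averaged(model, x):
    """Average g^{-1} model(g x) over the group, which is exactly equivariant."""
    outs = [g_inv(model(g(x))) for g, g_inv in zip(GROUP, GROUP_INV)]
    return np.mean(outs, axis=0)

x = np.array([1.0, 2.0, 3.0, 4.0])
# The raw model does not commute with the flip, but the averaged model does:
assert not np.allclose(model(flip(x)), flip(model(x)))
assert np.allclose(group_averaged(model, flip(x)), flip(group_averaged(model, x)))
```

The equivariance of the averaged model is algebraic, not approximate: transforming the input merely permutes the terms of the group sum, so no property of the underlying model is needed.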