🤖 AI Summary
This work proposes a gauge-equivariant diffusion generative model to address the challenges of efficient sampling and generalization in non-Abelian lattice gauge theories. By combining lattice gauge-equivariant convolutional neural networks (L-CNNs), which respect both local and global gauge symmetries, with a Metropolis-adjusted annealed Langevin algorithm (MAALA), the method generalizes with high fidelity to larger lattice volumes and weaker coupling regimes from a single training dataset. Evaluated on two-dimensional U(2) and SU(2) gauge theories, the model accurately predicts Wilson loops of various sizes and the topological susceptibility while maintaining high sampling acceptance rates and low estimation errors across both strong and weak coupling regimes, as well as under lattice-size extrapolation, all from models trained on a single conventional Monte Carlo ensemble.
📝 Abstract
We demonstrate that gauge equivariant diffusion models can accurately model the physics of non-Abelian lattice gauge theory using the Metropolis-adjusted annealed Langevin algorithm (MAALA), as exemplified by computations in two-dimensional U(2) and SU(2) gauge theories. Our network architecture is based on lattice gauge equivariant convolutional neural networks (L-CNNs), which respect local and global symmetries on the lattice. Models are trained on a single ensemble generated using a traditional Monte Carlo method. By studying Wilson loops of various sizes as well as the topological susceptibility, we find that the diffusion approach generalizes remarkably well to larger inverse couplings and lattice sizes with negligible loss of accuracy while retaining moderately high acceptance rates.
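The MAALA sampler referenced above combines annealed Langevin dynamics with a Metropolis accept/reject correction so that the chain targets the exact distribution despite discretization error in the Langevin proposal. The sketch below illustrates only the core Metropolis-adjusted Langevin (MALA) step on a toy scalar target; it is a hedged illustration under simple assumptions, not the paper's gauge-field implementation (which operates on link variables with an L-CNN-parameterized score and an annealing schedule):

```python
import numpy as np

def mala_step(x, log_prob, grad_log_prob, step, rng):
    """One Metropolis-adjusted Langevin step.

    Proposes x' = x + step * grad(log p)(x) + sqrt(2*step) * noise,
    then accepts or rejects with the MALA correction so the chain
    samples exactly from p despite discretization error.
    """
    noise = rng.standard_normal(x.shape)
    x_prop = x + step * grad_log_prob(x) + np.sqrt(2.0 * step) * noise

    # Log proposal density log q(b | a) for the Gaussian Langevin proposal
    def log_q(b, a):
        diff = b - a - step * grad_log_prob(a)
        return -np.sum(diff**2) / (4.0 * step)

    log_alpha = (log_prob(x_prop) - log_prob(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True   # accepted
    return x, False           # rejected

# Toy target: standard normal, log p(x) = -x^2/2 (up to a constant)
rng = np.random.default_rng(0)
x = np.zeros(1)
samples = []
for _ in range(5000):
    x, _ = mala_step(x, lambda y: -0.5 * np.sum(y**2),
                     lambda y: -y, 0.1, rng)
    samples.append(x.copy())
samples = np.array(samples)[1000:]  # discard burn-in
```

In an annealed variant such as MAALA, the target density is replaced by a sequence of progressively sharper intermediate densities (here that would mean varying `log_prob` and `step` over the schedule), with the Metropolis correction keeping each stage unbiased.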