🤖 AI Summary
This work addresses the limitations of strict equivariance in graph generative models: while strict equivariance preserves permutation symmetry, it incurs high computational cost and slow convergence. To mitigate these issues, the authors propose a controllable symmetry modulation mechanism that dynamically relaxes symmetry constraints during training by integrating sinusoidal positional encodings and node permutations into a discrete flow-matching framework. This approach accelerates early-stage convergence, suppresses the shortcut learning and repetitive generation that naive symmetry-breaking induces, and substantially delays overfitting. Experimental results show that the method achieves superior performance using only 19% of the baseline’s training epochs, improving both training efficiency and generation diversity.
📝 Abstract
Equivariance is central to graph generative models, as it ensures the model respects the permutation symmetry of graphs. However, strict equivariance can increase computational cost due to added architectural constraints, and can slow down convergence because the model must be consistent across a large space of possible node permutations. We study this trade-off for graph generative models. Specifically, we start from an equivariant discrete flow-matching model and relax its equivariance during training via a controllable symmetry modulation scheme based on sinusoidal positional encodings and node permutations. Experiments first show that symmetry-breaking can accelerate early training by providing an easier learning signal, but at the expense of encouraging shortcut solutions that can cause overfitting, where the model repeatedly generates graphs that are duplicates of the training set. In contrast, properly modulating the symmetry signal delays overfitting while still accelerating convergence, allowing the model to reach stronger performance with only $19\%$ of the baseline training epochs.
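The core idea of the abstract's symmetry modulation scheme can be sketched as follows: randomly permute the node order (so no fixed canonical ordering is learned), then add sinusoidal positional encodings scaled by a modulation coefficient that controls how much permutation symmetry is broken. This is a minimal illustrative sketch, not the paper's implementation; the function names (`sinusoidal_pe`, `modulated_features`) and the single scalar coefficient `lam` are assumptions for illustration.

```python
import numpy as np

def sinusoidal_pe(n_nodes, dim):
    """Standard Transformer-style sinusoidal positional encodings."""
    pos = np.arange(n_nodes)[:, None]            # node positions (n, 1)
    i = np.arange(dim // 2)[None, :]             # frequency index (1, dim/2)
    angles = pos / (10000 ** (2 * i / dim))
    pe = np.zeros((n_nodes, dim))
    pe[:, 0::2] = np.sin(angles)                 # even dims: sine
    pe[:, 1::2] = np.cos(angles)                 # odd dims: cosine
    return pe

def modulated_features(x, lam, rng):
    """Relax equivariance on node features x of shape (n, d).

    lam = 0 keeps the input permutation-symmetric (no positional signal);
    lam = 1 fully breaks symmetry. Annealing lam over training is the
    hypothetical "symmetry modulation" knob described in the abstract.
    """
    n, d = x.shape
    perm = rng.permutation(n)                    # random node relabeling
    x_perm = x[perm]
    return x_perm + lam * sinusoidal_pe(n, d)
```

With `lam = 0` the output is just a row permutation of `x`, so any downstream equivariant model sees a symmetry-preserving input; raising `lam` injects an ordering-dependent signal that gives the model an easier (but shortcut-prone) learning target.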