🤖 AI Summary
This work addresses the instability of Transformer training under high learning rates, which often leads to divergence. To mitigate this, the authors introduce a consensus mechanism into the Transformer architecture for the first time: a plug-and-play, graph-based consensus module that can replace standard attention layers or operate alongside them. The approach significantly widens the effective learning rate range and substantially improves training stability across diverse modalities, including text, DNA, and protein sequences, while preserving the original model's performance in hybrid configurations. Both theoretical analysis and extensive experiments corroborate the effectiveness and broad applicability of the proposed method.
📝 Abstract
Standard attention-based transformers are known to become unstable when the learning rate is overspecified during training, often diverging at high learning rates. While various methods have been proposed to improve resilience to such overspecification by modifying the optimization procedure, architectural innovations toward this end remain underexplored. In this work, we show that the consensus mechanism, a drop-in replacement for attention, stabilizes transformer training across a wider effective range of learning rates. We formulate consensus as a graphical model and provide extensive empirical analysis demonstrating improved stability across learning rate sweeps on text, DNA, and protein modalities. We further propose a hybrid consensus-attention framework that preserves performance while improving stability, and we provide theoretical analysis characterizing the stability properties of consensus.
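To make the "consensus as a drop-in replacement for attention" idea concrete, here is a minimal sketch of one common form of a graph-consensus update, in which each token moves a step toward a weighted average of its neighbors on a similarity graph. The abstract does not specify the paper's exact formulation, so the similarity-based adjacency, the step size `eps`, and the function name `consensus_layer` are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def consensus_layer(X, eps=0.1):
    """One hypothetical graph-consensus step over token embeddings.

    X:   (n_tokens, d) array of token embeddings
    eps: consensus step size; eps=0 leaves X unchanged
    """
    # Build a row-stochastic adjacency from scaled dot-product similarity
    # (an assumed choice; the paper's actual graph construction may differ).
    S = X @ X.T / np.sqrt(X.shape[1])
    A = np.exp(S - S.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)
    # Consensus update: each token moves toward the weighted mean of its
    # neighbors, contracting the embeddings toward agreement.
    return X + eps * (A @ X - X)
```

Intuitively, the step size `eps` bounds how far any token can move per layer, which is one plausible source of the improved stability the paper reports; replacing this block with standard softmax attention removes that contraction.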