Stabilizing Transformer Training Through Consensus

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of Transformer training under high learning rates, which often leads to divergence. To mitigate this issue, the authors introduce a consensus mechanism into the Transformer architecture, proposing a plug-and-play graph-based consensus module that can either replace standard attention layers or be integrated alongside them. The approach significantly widens the effective learning rate range and substantially improves training stability across diverse modalities, including text, DNA, and protein sequences, while preserving the original model performance in hybrid configurations. Both theoretical analysis and extensive experiments corroborate the effectiveness and broad applicability of the proposed method.

📝 Abstract
Standard attention-based transformers are known to exhibit instability under learning rate overspecification during training, particularly at high learning rates. While various methods have been proposed to improve resilience to such overspecification by modifying the optimization procedure, fundamental architectural innovations to this end remain underexplored. In this work, we illustrate that the consensus mechanism, a drop-in replacement for attention, stabilizes transformer training across a wider effective range of learning rates. We formulate consensus as a graphical model and provide extensive empirical analysis demonstrating improved stability across learning rate sweeps on text, DNA, and protein modalities. We further propose a hybrid consensus-attention framework that preserves performance while improving stability. We provide theoretical analysis characterizing the properties of consensus.
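The abstract describes consensus as a drop-in replacement for attention, formulated over a graph. The paper's exact update rule is not given here, so the following is only an illustrative sketch: it assumes consensus can be modeled as repeated averaging of each token's state with its graph neighbors (a discretized Laplacian-smoothing / consensus step), playing the same token-mixing role that attention normally plays. The function name `consensus_layer` and all parameters are hypothetical.

```python
import numpy as np

def consensus_layer(x, adj, steps=2, step_size=0.5):
    """Illustrative graph-consensus update (a sketch, not the paper's method).

    Each token state is pulled toward the mean of its graph neighbors:
        x <- x + step_size * (A_norm @ x - x)
    where A_norm is the row-normalized adjacency. This is a discretized
    consensus step that mixes tokens without attention's softmax(QK^T).
    """
    # Row-normalize the nonnegative adjacency into a stochastic matrix.
    a = adj / np.clip(adj.sum(axis=-1, keepdims=True), 1e-8, None)
    h = x
    for _ in range(steps):
        # Move each token part of the way toward its neighbor average.
        h = h + step_size * (a @ h - h)
    return h

# Usage: a fully connected graph over a length-4 sequence of 8-dim tokens.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
adj = np.ones((4, 4))
out = consensus_layer(x, adj)
```

On a complete graph, each step contracts every token toward the sequence mean, which gives an intuition for why such an operator can be less sensitive to large parameter updates than a softmax attention map.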
Problem

Research questions and friction points this paper is trying to address.

Transformer
training instability
learning rate overspecification
attention mechanism
consensus
Innovation

Methods, ideas, or system contributions that make the work stand out.

consensus mechanism
transformer stability
learning rate robustness
hybrid attention-consensus
graphical model
Shyam Venkatasubramanian
Anthrogen PBC, San Francisco, CA, USA

Sean Moushegian
Duke University
Deep Learning, Information Theory, Diffusion-based Methods

Michael Lin
Glaucoma Specialist, Massachusetts Eye and Ear
Glaucoma

Mir Park
Anthrogen PBC, San Francisco, CA, USA

Ankit Singhal
Anthrogen PBC, San Francisco, CA, USA

Connor Lee
California Institute of Technology
Robotics, Computer Vision, Machine Learning