MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited cross-layer information flow caused by static residual connections in Transformers, this paper proposes MUltiway Dynamic Dense (MUDD) connections. Unlike prior dense-connection schemes with static, shared weights, MUDD generates connection weights dynamically, conditioned on the hidden state at each sequence position and computed separately for each decoupled input stream of a Transformer block (query, key, value, and residual). The mechanism is lightweight, implemented in both JAX and PyTorch, adds only 0.23% parameters and 0.4% computation, and integrates into any Transformer architecture. Experiments show that MUDDPythia-2.8B matches Pythia-6.9B in pretraining perplexity and downstream tasks, rivals Pythia-12B in five-shot settings, and that MUDDFormer reaches the performance of Transformers trained with 1.8-2.4x the compute.
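The summary above describes per-position, per-stream dynamic mixing of all earlier layers' hidden states. A minimal NumPy sketch of that idea follows; the function and variable names (`mudd_aggregate`, `w_gen`) and the softmax normalization of the mixing weights are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mudd_aggregate(hiddens, w_gen):
    """Mix hidden states of all layers so far with dynamically
    generated, per-position weights (illustrative sketch).

    hiddens: list of (seq_len, d_model) arrays, one per layer
    w_gen:   (d_model, n_layers) projection mapping the current
             hidden state to one mixing weight per earlier layer
    """
    stack = np.stack(hiddens, axis=0)        # (n_layers, seq, d)
    # Weights depend on the hidden state at each position,
    # unlike static dense connections shared across positions.
    logits = hiddens[-1] @ w_gen             # (seq, n_layers)
    weights = softmax(logits, axis=-1)       # normalize over layers
    # Weighted sum over layers, independently at each position.
    return np.einsum("ls,lsd->sd", weights.T, stack)

seq_len, d_model, n_layers = 4, 8, 3
hiddens = [rng.standard_normal((seq_len, d_model)) for _ in range(n_layers)]

# A separate weight generator per decoupled input stream
# (query, key, value, residual), so each stream receives its
# own dynamic mixture of earlier layers.
streams = {
    name: mudd_aggregate(hiddens, 0.01 * rng.standard_normal((d_model, n_layers)))
    for name in ("query", "key", "value", "residual")
}
print({k: v.shape for k, v in streams.items()})
```

Because `w_gen` is tiny relative to the block's own projections, this kind of module is consistent with the reported sub-1% parameter and compute overhead.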

📝 Abstract
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dynamically depending on hidden states at each sequence position and for each decoupled input stream (the query, key, value or residual) of a Transformer block. MUDD connections can be seamlessly integrated into any Transformer architecture to create MUDDFormer. Extensive experiments show that MUDDFormer significantly outperforms Transformers across various model architectures and scales in language modeling, achieving the performance of Transformers trained with 1.8X-2.4X compute. Notably, MUDDPythia-2.8B matches Pythia-6.9B in pretraining ppl and downstream tasks and even rivals Pythia-12B in five-shot settings, while adding only 0.23% parameters and 0.4% computation. Code in JAX and PyTorch and pre-trained models are available at https://github.com/Caiyun-AI/MUDDFormer .
Problem

Research questions and friction points this paper is trying to address.

Enhance cross-layer information flow
Dynamic generation of connection weights
Improve Transformer model efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic connection weights
Enhanced cross-layer flow
Seamless Transformer integration
Da Xiao
Beijing University of Posts and Telecommunications, Beijing, China
Qingye Meng
NLP Algorithm Engineer
architecture of LLMs, mechanistic interpretability
Shengping Li
ColorfulClouds Technology Co., Ltd., Beijing, China
Xingyuan Yuan
ColorfulClouds Technology Co., Ltd., Beijing, China