EPAS: Efficient Training with Progressive Activation Sharing

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational inefficiency in Transformer training and inference caused by redundant activations in query-key (QK) and key-value (KV) computations. To mitigate this, the authors propose a progressive cross-layer activation sharing mechanism that integrates progressive training with dynamically expanding shared regions, gradually extending the sharing scope from deeper to shallower layers. The approach supports flexible adaptation to varying computational budgets. Experiments on LLaMA models (125M–7B) demonstrate up to an 11.1% improvement in training throughput and up to a 29% gain in inference throughput while maintaining loss curves comparable to the baseline. Furthermore, when applied in continual pretraining, the method yields up to a 10% improvement in average accuracy over state-of-the-art methods, effectively balancing efficiency and performance.
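As a rough illustration of the sharing mechanism described above (toy sizes, function names, and the single-query setup are our assumptions, not details from the paper), a layer inside the shared region can skip its own QK projections and reuse the QK activations produced by an earlier layer, applying only its own value projection:

```python
import math

def attention(q, k_list, v_list):
    """Toy single-head scaled dot-product attention for one query vector."""
    d = len(q)
    scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in k_list]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(v_list[0])
    return [sum(w * v[j] for w, v in zip(weights, v_list)) for j in range(dim)]

# Layer outside the sharing region: computes its own Q and K activations.
q_anchor = [1.0, 0.0]
k_anchor = [[1.0, 0.0], [0.0, 1.0]]

# Layer inside the sharing region: reuses the anchor layer's Q/K activations,
# skipping its own Q/K projections, and applies only its own V projection.
v_shared = [[2.0, 0.0], [0.0, 2.0]]
out = attention(q_anchor, k_anchor, v_shared)
```

Skipping the Q/K projections (and the associated activations) in the shared layers is where the reported compute and throughput savings would come from.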

📝 Abstract
We present a novel method for Efficient training with Progressive Activation Sharing (EPAS). This method bridges the progressive training paradigm with the phenomenon of redundant QK (or KV) activations across the deeper layers of transformers. EPAS gradually grows a sharing region during training by switching decoder layers to activation sharing mode, which increases throughput due to reduced compute. To exploit deeper-layer redundancy, the sharing region starts at the deep end of the model and grows toward the shallow end. EPAS-trained models allow for variable activation-sharing region lengths to match different compute budgets during inference. Empirical evaluations with QK activation sharing in LLaMA models ranging from 125M to 7B parameters show up to an 11.1% improvement in training throughput and up to a 29% improvement in inference throughput while maintaining loss curves similar to the baseline models. Furthermore, applying EPAS in continual pretraining to transform TinyLLaMA into an attention-sharing model yields up to a 10% improvement in average accuracy over state-of-the-art methods, emphasizing the significance of progressive training in cross-layer activation sharing models.
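The abstract describes a sharing region that grows from the deepest layer toward shallower ones as training progresses. A minimal sketch of how such a schedule might look (the linear growth, function name, and parameters are our assumptions; the paper's actual schedule may differ):

```python
def sharing_layers(step, total_steps, num_layers, max_shared):
    """Return indices of decoder layers in activation-sharing mode at a
    given training step. The shared region starts at the deepest layer
    and expands toward shallower layers as training progresses."""
    # Fraction of training completed, clamped to [0, 1].
    progress = min(max(step / total_steps, 0.0), 1.0)
    # Number of layers currently switched to sharing mode.
    num_shared = int(progress * max_shared)
    # Deepest layers first, e.g. layers 24..31 of a 32-layer model.
    return list(range(num_layers - num_shared, num_layers))
```

For inference, the same function could be frozen at any intermediate `num_shared`, which is one way the "variable region lengths for different compute budgets" property could be realized.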
Problem

Research questions and friction points this paper is trying to address.

activation redundancy
efficient training
transformer models
throughput optimization
QK activation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Activation Sharing
Transformer Efficiency
QK Activation Redundancy
Training Throughput
Inference Acceleration
Rezaul Karim
Ascend Team, Huawei Technologies, Toronto, Canada
Maryam Dialameh
Ascend Team, Huawei Technologies, Toronto, Canada; Department of Mechanical and Mechatronics Engineering, University of Waterloo, Canada
Yang Liu
Ascend Team, Huawei Technologies, Toronto, Canada
Boxing Chen
Huawei Technologies Canada
Natural Language Processing · Artificial Intelligence
Walid Ahmed
Huawei Technologies Canada
Deep Learning · Machine Learning · Soft Computing