🤖 AI Summary
This work addresses the computational inefficiency in Transformer training and inference caused by redundant query-key (QK) and key-value (KV) activations across layers. To mitigate this, the authors propose a progressive cross-layer activation sharing mechanism that integrates progressive training with a dynamically expanding shared region, gradually extending the sharing scope from deeper to shallower layers. The approach supports flexible adaptation to varying computational budgets. Experiments on LLaMA models (125M–7B) show up to an 11.1% improvement in training throughput and up to a 29% gain in inference throughput while maintaining loss curves comparable to the baseline. Furthermore, the method yields up to a 10% average accuracy improvement over state-of-the-art methods when applied during continual pretraining, effectively balancing efficiency and performance.
📝 Abstract
We present a novel method for Efficient training with Progressive Activation Sharing (EPAS). The method bridges the progressive training paradigm with the phenomenon of redundant QK (or KV) activations across the deeper layers of transformers. EPAS gradually grows a sharing region during training by switching decoder layers to activation-sharing mode, increasing throughput through reduced compute. To exploit deeper-layer redundancy, the sharing region starts at the deep end of the model and grows toward the shallow end. EPAS-trained models allow variable sharing-region lengths to match different compute budgets at inference time. Empirical evaluations with QK activation sharing in LLaMA models ranging from 125M to 7B parameters show up to an 11.1% improvement in training throughput and up to a 29% improvement in inference throughput while maintaining loss curves similar to those of the baseline models. Furthermore, applying EPAS in continual pretraining to transform TinyLLaMA into an attention-sharing model yields up to a 10% improvement in average accuracy over state-of-the-art methods, underscoring the importance of progressive training in cross-layer activation sharing models.
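To make the growing-region idea concrete, here is a minimal sketch of a progressive sharing schedule. The function name, the linear growth schedule, and the `max_shared` cap are illustrative assumptions; the abstract only states that the region starts at the deep end and expands toward the shallow end during training.

```python
# Hypothetical sketch of an EPAS-style progressive sharing schedule.
# Assumptions (not specified in the abstract): the region grows linearly
# with training progress, up to a cap of `max_shared` layers.

def shared_region(step: int, total_steps: int, num_layers: int,
                  max_shared: int) -> list[int]:
    """Return indices of decoder layers currently in activation-sharing mode.

    The region covers the deepest `n_shared` layers, so sharing starts at
    the deep end of the model and grows toward the shallow end as training
    progresses.
    """
    frac = min(max(step / total_steps, 0.0), 1.0)  # training progress in [0, 1]
    n_shared = int(round(frac * max_shared))
    # Deepest n_shared layers reuse shared activations; shallower layers
    # compute their QK (or KV) activations as usual.
    return list(range(num_layers - n_shared, num_layers))


# Example: a 12-layer model with at most 6 sharing layers over 1000 steps.
print(shared_region(0, 1000, 12, 6))     # empty region at the start
print(shared_region(500, 1000, 12, 6))   # region has grown partway
print(shared_region(1000, 1000, 12, 6))  # full region at the end
```

At inference, the same function could be called with a fixed region length (rather than a step-dependent one) to trade accuracy for throughput under a given compute budget, mirroring the variable-length sharing the paper describes.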