🤖 AI Summary
Sparse training of Transformers often disrupts the critical multiplicative interactions between paired weight matrices in attention and feed-forward layers, leading to substantial performance degradation at high sparsity levels. To address this, we propose EcoSpa—a computationally efficient structured sparse training method that jointly prunes interacting weight-matrix pairs, sparsifying them in a coupled fashion with aligned row/column removal so that their structural interdependencies are explicitly preserved. EcoSpa further introduces a new, fine-grained importance metric defined over structural components, improving robustness at high sparsity. The method relies solely on standard PyTorch operations, requiring no custom hardware or kernel modifications. Evaluated on LLaMA-1B, EcoSpa achieves 50% memory reduction and a 21% training speedup; on GPT-2-Medium, it attains a 2.2× compression ratio, lowers perplexity by 2.4, and accelerates inference by 1.6×.
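To make the coupling concrete, below is a minimal PyTorch sketch of aligned row/column removal for a feed-forward pair. The `couple_prune_ffn` helper and its magnitude-based score are illustrative assumptions, not EcoSpa's actual importance metric or released code.

```python
import torch
import torch.nn as nn

def couple_prune_ffn(w_up: nn.Linear, w_down: nn.Linear, keep_ratio: float = 0.5):
    """Jointly prune the intermediate dimension shared by an FFN pair.

    In h = w_down(act(w_up(x))), row i of w_up.weight and column i of
    w_down.weight form one coupled unit: removing them together keeps
    the matrix product structurally consistent.
    """
    # Joint importance of each intermediate unit: product of the L2 norm
    # of its row in w_up and its matching column in w_down. This is a
    # simple magnitude proxy, not EcoSpa's calibrated metric.
    scores = w_up.weight.norm(dim=1) * w_down.weight.norm(dim=0)

    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices.sort().values

    # Aligned removal: slice the same indices from both matrices.
    new_up = nn.Linear(w_up.in_features, k, bias=w_up.bias is not None)
    new_down = nn.Linear(k, w_down.out_features, bias=w_down.bias is not None)
    new_up.weight = nn.Parameter(w_up.weight[keep].clone())
    new_down.weight = nn.Parameter(w_down.weight[:, keep].clone())
    if w_up.bias is not None:
        new_up.bias = nn.Parameter(w_up.bias[keep].clone())
    if w_down.bias is not None:
        new_down.bias = nn.Parameter(w_down.bias.clone())
    return new_up, new_down
```

Because the result is a smaller dense pair rather than a masked one, the forward pass needs no custom kernels, which is what makes this style of structured sparsity runnable with standard PyTorch operations.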
📝 Abstract
Transformers have become the backbone of modern AI, yet their high computational demands pose critical system challenges. While sparse training offers efficiency gains, existing methods fail to preserve the structural relationships between weight matrices that interact multiplicatively in attention and feed-forward layers. This oversight leads to performance degradation at high sparsity levels. We introduce EcoSpa, an efficient structured sparse training method that jointly evaluates and sparsifies coupled weight matrix pairs, preserving their interaction patterns through aligned row/column removal. EcoSpa introduces a new granularity for calibrating structural component importance and performs coupled estimation and sparsification across both pre-training and fine-tuning scenarios. Evaluations demonstrate substantial improvements: EcoSpa enables efficient training of LLaMA-1B with 50% memory reduction and 21% faster training, achieves $2.2\times$ model compression on GPT-2-Medium with $2.4$ lower perplexity, and delivers $1.6\times$ inference speedup. The approach uses standard PyTorch operations, requiring no custom hardware or kernels, making efficient transformer training accessible on commodity hardware.
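The same aligned-removal principle applies inside attention, where query and key projections interact multiplicatively through their inner products. A minimal single-head sketch follows; `couple_prune_qk`, the magnitude-based score, and the example dimensions are illustrative assumptions rather than the paper's implementation (real multi-head attention would additionally group indices per head).

```python
import torch
import torch.nn as nn

def couple_prune_qk(w_q: nn.Linear, w_k: nn.Linear, keep_ratio: float = 0.5):
    """Prune the shared projection dimension of a query/key pair in lockstep.

    Attention scores use q @ k.T, so output unit d of w_q only ever
    multiplies output unit d of w_k; the two rows must be kept or
    removed together. Single-head case; per-head grouping is omitted.
    """
    # Magnitude proxy for joint importance, standing in for the paper's
    # component-level metric.
    scores = w_q.weight.norm(dim=1) * w_k.weight.norm(dim=1)
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices.sort().values

    for lin in (w_q, w_k):
        lin.weight = nn.Parameter(lin.weight[keep].clone())
        if lin.bias is not None:
            lin.bias = nn.Parameter(lin.bias[keep].clone())
        lin.out_features = k
    return w_q, w_k

# Example: both projections shrink from 512->64 to 512->32 in lockstep.
w_q = nn.Linear(512, 64, bias=False)
w_k = nn.Linear(512, 64, bias=False)
w_q, w_k = couple_prune_qk(w_q, w_k, keep_ratio=0.5)
assert w_q.weight.shape == w_k.weight.shape == (32, 512)
```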