🤖 AI Summary
This work addresses the high computational cost and inter-layer redundancy inherent in large language models, where existing pruning methods struggle to balance efficiency with performance recovery. To this end, the authors propose GradMAP, an efficient two-stage layer pruning approach. First, a global layer importance metric is derived from gradient magnitudes obtained via a single backward pass. Subsequently, a projection compensation matrix is introduced to correct, in one step, the feature mean shift induced by pruning. GradMAP achieves an average fourfold acceleration in pruning speed while matching or surpassing the performance of current state-of-the-art methods, thereby significantly enhancing both pruning efficiency and scalability.
📝 Abstract
Large Language Models (LLMs) exhibit strong reasoning abilities, but their high computational costs limit practical deployment. Recent studies reveal significant redundancy across LLM layers, making layer pruning an active research topic. Layer pruning research focuses primarily on two aspects: measuring layer importance and recovering performance after pruning. Unfortunately, existing methods fail to achieve both pruning performance and efficiency at the same time. In this study, we propose GradMAP, a faster layer pruning method built on a Gradient Metric And Projection compensation, which consists of two stages. In the first stage, we introduce a novel metric based on gradient magnitudes that enables a global assessment of layer importance; notably, it requires only a single backward pass per pruning decision, substantially improving pruning efficiency. In the second stage, we analyze the layers that suffer the largest mean shift after pruning, then incorporate a simple yet effective projection compensation matrix that corrects this drift in one step, effectively alleviating the performance degradation caused by layer pruning. Extensive experiments show that GradMAP outperforms previous layer pruning methods in both pruning speed (an average 4× speedup) and performance.
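To make the first stage concrete, here is a minimal sketch of gradient-magnitude layer scoring from a single backward pass. It assumes a Hugging Face-style causal LM whose decoder layers live in `model.model.layers` and whose forward returns `.loss` when `labels` are in the batch; the per-layer sum of absolute parameter gradients is an assumed instantiation of the metric, and the function name `layer_importance` is hypothetical, not GradMAP's actual API.

```python
import torch

def layer_importance(model, batch):
    """Score each decoder layer by gradient magnitude from one backward pass.

    Sketch only: the exact GradMAP metric is not specified here; summing
    absolute parameter gradients per layer is an assumption.
    """
    model.zero_grad()
    loss = model(**batch).loss      # `batch` includes labels (assumed HF API)
    loss.backward()                 # the single backward pass

    scores = []
    for layer in model.model.layers:
        mag = sum(p.grad.abs().sum().item()
                  for p in layer.parameters() if p.grad is not None)
        scores.append(mag)
    return scores  # lowest-scoring layers are the pruning candidates
```

Because all layer scores come from the same backward pass, the ranking is global rather than greedy, which is what allows the claimed speedup over methods that re-evaluate importance per layer.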
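For the second stage, a one-step projection compensation can be fit in closed form by least squares on a small calibration set. The formulation below is an assumed illustration, not necessarily GradMAP's exact construction: `h_pruned` and `h_dense` are hidden states collected at the same position from the pruned and original models, and the appended bias column is what absorbs the pruning-induced mean shift.

```python
import torch

def fit_compensation(h_pruned, h_dense):
    """Fit a projection-plus-bias correction in one closed-form step.

    h_pruned, h_dense: (num_tokens, hidden_dim) calibration hidden states.
    Assumed least-squares formulation; the bias term models the mean shift.
    """
    ones = torch.ones(h_pruned.size(0), 1,
                      dtype=h_pruned.dtype, device=h_pruned.device)
    A = torch.cat([h_pruned, ones], dim=1)        # augment with bias column
    Wb = torch.linalg.lstsq(A, h_dense).solution  # single closed-form solve
    W, b = Wb[:-1], Wb[-1]                        # projection and mean-shift bias
    return W, b  # apply as x @ W + b before the next retained layer
```

Since the fit is a single linear solve rather than gradient-based fine-tuning, the compensation adds negligible cost on top of the pruning itself.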