GradMAP: Faster Layer Pruning with Gradient Metric and Projection Compensation

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost and inter-layer redundancy inherent in large language models, where existing pruning methods struggle to balance efficiency with performance recovery. To this end, the authors propose GradMAP, an efficient two-stage layer pruning approach. First, a global layer importance metric is derived from gradient magnitudes obtained via a single backward pass. Subsequently, a projection compensation matrix is introduced to correct, in one step, the feature mean shift induced by pruning. GradMAP achieves an average fourfold acceleration in pruning speed while matching or surpassing the performance of current state-of-the-art methods, thereby significantly enhancing both pruning efficiency and scalability.

📝 Abstract
Large Language Models (LLMs) exhibit strong reasoning abilities, but their high computational costs limit practical deployment. Recent studies reveal significant redundancy across LLM layers, making layer pruning an active research topic. Layer pruning research primarily focuses on two aspects: measuring layer importance and recovering performance after pruning. Unfortunately, existing works fail to maintain pruning performance and efficiency simultaneously. In this study, we propose GradMAP, a faster layer pruning method with **Grad**ient **M**etric **A**nd **P**rojection compensation, which consists of two stages. In the first stage, we introduce a novel metric based on gradient magnitudes, enabling a global assessment of layer importance. Notably, it requires only a single backward propagation step per pruning decision, substantially enhancing pruning efficiency. In the second stage, we first analyze the layers with the largest mean shift resulting from pruning, and then incorporate a simple yet effective projection compensation matrix to correct this drift in one step. In this way, the performance degradation caused by layer pruning is effectively alleviated. Extensive experiments show that GradMAP outperforms previous layer pruning methods in both pruning speed (achieving an average 4× speedup) and performance.
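The two stages described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the toy model, function names, and the least-squares form of the compensation matrix are assumptions made here for clarity. Stage 1 scores layers by gradient magnitude after one backward pass; stage 2 fits a linear projection that maps post-pruning features back toward the original features in a single step.

```python
# Hypothetical sketch of GradMAP's two stages (details are assumptions,
# not the paper's actual implementation).
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    """Toy stand-in for a stack of transformer layers."""

    def __init__(self, dim: int = 4, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x


def layer_importance_by_gradient(model: TinyModel, batch: dict, loss_fn) -> list:
    """Stage 1: score each layer from a SINGLE backward pass.

    A layer whose parameters receive small gradients contributes little
    to the loss and is a candidate for pruning.
    """
    model.zero_grad()
    loss = loss_fn(model(batch["input"]), batch["target"])
    loss.backward()  # one backward propagation per pruning decision
    scores = []
    for layer in model.layers:
        grads = [p.grad.flatten() for p in layer.parameters() if p.grad is not None]
        scores.append(torch.cat(grads).abs().mean().item())
    return scores


def projection_compensation(h_pruned: torch.Tensor, h_full: torch.Tensor) -> torch.Tensor:
    """Stage 2: one-step correction of the feature shift induced by pruning.

    Solves min_W ||h_pruned @ W - h_full||_F by least squares, so the
    compensation matrix W maps pruned-model features toward the
    original-model features without iterative fine-tuning.
    """
    return torch.linalg.lstsq(h_pruned, h_full).solution
```

Applying `W` amounts to folding one extra linear map into the layer that follows the pruned block, so inference cost is essentially unchanged.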
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
layer pruning
computational cost
model redundancy
pruning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

layer pruning
gradient metric
projection compensation
LLM efficiency
model compression
Authors
Hao Liu (CASIA)
Guangyan Li (Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences)
Wensheng Zhang (Guangzhou University)
Yongqiang Tang (Institute of Automation, Chinese Academy of Sciences)