Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between dynamic pruning and performance preservation in large language model (LLM) inference acceleration, this paper proposes Probe Pruning (PP), an online, batch-wise, structured dynamic pruning framework. PP introduces a lightweight, multi-layer forward probing mechanism grounded in residual importance, requiring no fine-tuning or architectural modification, to adaptively identify critical channels for each batch. It further proposes a novel PP importance score that jointly incorporates historical activation awareness and structured pruning constraints. Evaluated on LLaMA-2-7B, PP spends only 1.5% additional FLOPs on probing while accelerating inference at a 40% pruning ratio; its ratio of performance degradation to runtime reduction is 2.56× lower than the state-of-the-art method's, markedly improving the efficiency-accuracy trade-off.
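The headline metric, performance degradation per unit of runtime reduction, can be read as a plain ratio of relative changes. A minimal sketch of that reading (the paper's exact definition may differ; function and argument names are illustrative):

```python
def degradation_per_runtime_reduction(ppl_pruned, ppl_dense,
                                      runtime_pruned, runtime_dense):
    """Performance degradation per unit of runtime reduction (lower is better).

    Degradation: relative perplexity increase of the pruned model.
    Runtime reduction: relative wall-clock time saved by pruning.
    """
    degradation = (ppl_pruned - ppl_dense) / ppl_dense
    runtime_reduction = (runtime_dense - runtime_pruned) / runtime_dense
    return degradation / runtime_reduction


# Example: 10% perplexity increase bought with 20% runtime savings -> ratio 0.5.
ratio = degradation_per_runtime_reduction(5.5, 5.0, 0.8, 1.0)
```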

📝 Abstract
We introduce Probe Pruning (PP), a novel framework for online, dynamic, structured pruning of Large Language Models (LLMs) applied in a batch-wise manner. PP leverages the insight that not all samples and tokens contribute equally to the model's output, and probing a small portion of each batch effectively identifies crucial weights, enabling tailored dynamic pruning for different batches. It comprises three main stages: probing, history-informed pruning, and full inference. In the probing stage, PP selects a small yet crucial set of hidden states, based on residual importance, to run a few model layers ahead. During the history-informed pruning stage, PP strategically integrates the probing states with historical states. Subsequently, it structurally prunes weights based on the integrated states and the PP importance score, a metric developed specifically to assess the importance of each weight channel in maintaining performance. In the final stage, full inference is conducted on the remaining weights. A major advantage of PP is its compatibility with existing models, as it operates without requiring additional neural network modules or fine-tuning. Comprehensive evaluations of PP on LLaMA-2/3 and OPT models reveal that even minimal probing (using just 1.5% of FLOPs) can substantially enhance the efficiency of structured pruning of LLMs. For instance, when evaluated on LLaMA-2-7B with WikiText2, PP achieves a 2.56 times lower ratio of performance degradation per unit of runtime reduction compared to the state-of-the-art method at a 40% pruning ratio. Our code is available at https://github.com/Qi-Le1/Probe_Pruning.
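The three stages above (probing, history-informed pruning, full inference) can be sketched on a single linear layer. Everything below is an illustrative assumption, not the paper's exact formulation: the residual-importance selection is stood in for by token norms, the history blend is a simple exponential moving average, and the channel score is a generic activation-times-weight-norm heuristic.

```python
import numpy as np

def probe_prune_infer(batch, weight, probe_frac=0.05, prune_ratio=0.4,
                      history=None, alpha=0.9):
    """Hypothetical sketch of PP's three stages for one linear layer.

    batch:  (tokens, d_in) hidden states for the current batch.
    weight: (d_in, d_out) layer weights, pruned along input channels.
    """
    # Stage 1: probing. Run only a small, "important" subset of hidden
    # states ahead; token norm stands in for residual importance here.
    norms = np.linalg.norm(batch, axis=1)
    k = max(1, int(probe_frac * batch.shape[0]))
    probe = batch[np.argsort(norms)[-k:]]

    # Stage 2: history-informed pruning. Blend probe statistics with a
    # running history of channel activations (EMA is an assumed rule),
    # then score each input channel.
    probe_stat = np.abs(probe).mean(axis=0)  # per-channel activation magnitude
    if history is not None:
        probe_stat = alpha * history + (1 - alpha) * probe_stat
    score = probe_stat * np.linalg.norm(weight, axis=1)

    # Structurally drop the lowest-scoring input channels for this batch.
    n_keep = max(1, int((1 - prune_ratio) * weight.shape[0]))
    keep = np.sort(np.argsort(score)[-n_keep:])

    # Stage 3: full inference on the remaining weights only.
    out = batch[:, keep] @ weight[keep, :]
    return out, probe_stat  # probe_stat becomes next batch's history
```

Because the kept-channel set is recomputed per batch from the probe, different batches can prune different channels, which is the batch-wise, dynamic aspect the abstract describes.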
Problem

Research questions and friction points this paper is trying to address.

Dynamic pruning for efficient LLM inference without performance loss
Identifying crucial weights per batch via lightweight probing
Compatibility with existing LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic pruning via model-probing
History-informed strategic pruning
Compatibility without fine-tuning