High-Fidelity Pruning for Large Language Models

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of deploying large language models under stringent computational and memory constraints. Existing Taylor-based pruning methods, which rely on single-label cross-entropy, often degrade performance due to their neglect of global information in the output distribution. To overcome this limitation, we propose a novel neuron importance criterion that integrates the entropy of the model's output distribution into the Taylor expansion framework. This approach comprehensively quantifies each neuron's contribution to the overall predictive capability without requiring a teacher model or incurring the overhead of self-distillation. By incorporating information entropy into Taylor pruning for the first time, our method preserves the fidelity of the output distribution while significantly reducing resource consumption. Extensive zero-shot evaluations on LLaMA and Qwen model families demonstrate substantial improvements over state-of-the-art pruning techniques, achieving high compression rates with minimal performance loss.

Technology Category

Application Category

๐Ÿ“ Abstract
Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, yet their significant computational and memory requirements present major challenges for deployment. A common approach uses Taylor expansion on the loss function to estimate neuron importance. However, because this approach relies on one-hot cross-entropy loss, it has a key limitation: it narrowly assesses importance based only on the probability assigned to the single predicted next token, thereby ignoring the other potential predictions of the original model. An intuitive remedy is to employ a self-distillation criterion for importance evaluation, but this introduces significant computational overhead by requiring a separate teacher model for supervision. To this end, we propose a simple but effective criterion, the information entropy of the model's output distribution, to efficiently evaluate neuron importance scores with Taylor pruning without requiring an additional teacher. Compared to the plain cross-entropy criterion, it provides a more holistic criterion for Taylor pruning, pruning neurons with the least impact on the model's predictions in a global manner and thereby preserving the fidelity of the model's predictive capabilities. Experimental results on extensive zero-shot benchmarks demonstrate that our method consistently outperforms existing pruning methods across the LLaMA and Qwen series models. The source code and trained weights are available at https://github.com/visresearch/HFPrune.
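The criterion described above can be sketched as follows: take the entropy of the output distribution as the loss, and score each neuron by the first-order Taylor term |activation × gradient|. This is a minimal single-layer illustration under assumed names (`entropy_grad`, `taylor_entropy_importance`), not the paper's actual implementation, which operates on full LLM layers via autograd.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_grad(z):
    """Analytic gradient of the output-distribution entropy
    H(p) = -sum_k p_k log p_k with respect to the logits z:
    dH/dz_i = -p_i * (log p_i + H).
    Note it vanishes for a uniform distribution, where entropy is maximal."""
    p = softmax(z)
    H = -np.sum(p * np.log(p))
    return -p * (np.log(p) + H)

def taylor_entropy_importance(W, h):
    """First-order Taylor importance of each hidden neuron j:
    |h_j * dH/dh_j|, for a single linear output layer z = W @ h.
    Neurons with low scores perturb the output distribution least
    and are candidates for pruning."""
    z = W @ h
    g_z = entropy_grad(z)   # dH/dz at the logits
    g_h = W.T @ g_z         # chain rule back to the hidden activations
    return np.abs(h * g_h)
```

Replacing the one-hot cross-entropy loss with the entropy H in the Taylor term is what makes the score sensitive to the whole output distribution rather than to a single target token's probability.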
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Model Pruning
Taylor Expansion
Cross Entropy Loss
Information Fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Taylor pruning
information entropy
large language models
model compression
neuron importance
🔎 Similar Papers
No similar papers found.