LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models

πŸ“… 2024-08-20
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 1
πŸ€– AI Summary
Existing LLM pruning methods overlook how weight importance drifts during the pruning process itself, leading to performance degradation. This work proposes a one-shot sparsity-mask reconstruction framework that requires no retraining. First, it introduces a block-level error-propagation optimization mechanism that coordinates sparsification across Self-Attention and MLP modules. Second, it proposes a gradient-weighted importance metric (weights multiplied by gradients) that dynamically captures importance drift throughout pruning. Third, it combines block-aware error compensation with joint structural optimization so that neither retraining nor weight reconstruction is needed. Evaluated on the LLaMA and OPT families (7B–13B), the method completes pruning in about 30 minutes on a single A100 GPU and consistently outperforms state-of-the-art approaches in perplexity and zero-shot transfer tasks, without any fine-tuning or retraining.

πŸ“ Abstract
Large language models (LLMs) have grown significantly in scale, leading to a critical need for efficient model pruning techniques. Existing post-training pruning techniques primarily focus on measuring weight importance on converged dense models to determine salient weights to retain. However, they often overlook the changes in weight importance during the pruning process, which can lead to performance degradation in the pruned models. To address this issue, we present LLM-Barber (Block-Aware Rebuilder for Sparsity Mask in One-Shot), a novel one-shot pruning framework that rebuilds the sparsity mask of pruned models without any retraining or weight reconstruction. LLM-Barber incorporates block-aware error optimization across Self-Attention and MLP blocks, ensuring global performance optimization. Inspired by the recent discovery of prominent outliers in LLMs, LLM-Barber introduces an innovative pruning metric that identifies weight importance using weights multiplied by gradients. Our experiments show that LLM-Barber can efficiently prune models like LLaMA and OPT families with 7B to 13B parameters on a single A100 GPU in just 30 minutes, achieving state-of-the-art results in both perplexity and zero-shot performance across various language benchmarks. Code is available at https://github.com/YupengSu/LLM-Barber.
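The block-aware idea in the abstract is that pruning error should be measured at the output of a whole Self-Attention or MLP block, not per layer. A minimal sketch of that objective (hypothetical helper names; `block_fn`, `mlp_block`, and the calibration shapes are illustrative assumptions, not the authors' released code):

```python
import numpy as np

def block_output_error(block_fn, x, weights, masks):
    """Block-aware error: compare a whole block's output before and
    after masking, rather than comparing layer by layer.
    `block_fn` is a hypothetical stand-in for the block's forward pass."""
    dense_out = block_fn(x, weights)
    sparse_out = block_fn(x, [w * m for w, m in zip(weights, masks)])
    # Squared Frobenius norm of the block-level output discrepancy.
    return float(np.sum((dense_out - sparse_out) ** 2))

# Toy "block": a two-layer MLP evaluated on calibration input x.
def mlp_block(x, w):
    w1, w2 = w
    return np.maximum(x @ w1, 0.0) @ w2

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16))
w = [rng.standard_normal((16, 32)), rng.standard_normal((32, 16))]
masks = [rng.random(wi.shape) > 0.5 for wi in w]  # random 50% masks
err = block_output_error(mlp_block, x, w, masks)
```

A mask rebuilder would search for masks that minimize this block-level error, so that errors introduced in one layer can be compensated elsewhere in the same block.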
Problem

Research questions and friction points this paper is trying to address.

Efficient pruning of large language models without retraining
Block-aware error optimization across Self-Attention and MLP blocks
Accurate weight-importance identification via the weight-gradient product
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-shot pruning without retraining or weight reconstruction
Block-aware error optimization for global performance
Weight-gradient product as the pruning metric
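The weight-gradient metric above can be sketched in a few lines: score each weight by |W * G| and keep the highest-scoring fraction in one shot. This is a hypothetical illustration of the metric, not the authors' released implementation:

```python
import numpy as np

def rebuild_sparsity_mask(weights, grads, sparsity=0.5):
    """Sketch of a weight-times-gradient pruning metric: importance is
    |W * G|, and the top (1 - sparsity) fraction of weights is kept."""
    scores = np.abs(weights * grads)           # importance = |weight x gradient|
    k = int(scores.size * (1.0 - sparsity))    # number of weights to keep
    threshold = np.partition(scores.ravel(), -k)[-k]
    return scores >= threshold                 # True = keep, False = prune

# Toy usage: prune a 4x4 weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
G = rng.standard_normal((4, 4))
mask = rebuild_sparsity_mask(W, G, sparsity=0.5)
```

Compared with magnitude-only metrics, multiplying by the gradient lets the score reflect how sensitive the loss is to each weight, which is what allows importance to shift as pruning proceeds.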
Yupeng Su
UC Santa Barbara
Efficient LLMs, Edge Deployment, Quantization, Sparsity
Ziyi Guan
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
Xiaoqun Liu
Michigan State University
Tianlai Jin
School of Microelectronics, Southern University of Science and Technology, Shenzhen, China
Dongkuan Wu
School of Microelectronics, Southern University of Science and Technology, Shenzhen, China
G. Chesi
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
Ngai Wong
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong, China
Hao Yu
School of Microelectronics, Southern University of Science and Technology, Shenzhen, China