Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes

πŸ“… 2024-02-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 25
✨ Influential: 4
πŸ“„ PDF
πŸ€– AI Summary
Existing structured pruning methods for large language models (LLMs) rely heavily on backpropagation, incurring substantial memory and compute overhead. To address this, the authors propose Bonsai, a fully backpropagation-free structured pruning method for LLMs. Bonsai estimates module importance via forward-pass perturbation analysis and performs module-level structured pruning without any gradient computation. On a single NVIDIA A6000 GPU, Bonsai prunes the 8B-parameter LLaMA-3 model to 50% sparsity, a task infeasible for backprop-based methods, which require two to three times the memory. The resulting models run twice as fast as those produced by semi-structured pruning while achieving state-of-the-art accuracy. By removing the dependence on gradient computation, Bonsai broadens the feasibility of compressing LLMs on resource-constrained hardware.

πŸ“ Abstract
Structured pruning is a promising approach to create smaller, faster LLMs. However, existing methods typically rely on backward passes, which can inflate memory requirements and compute costs. In this work we introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation, significantly reducing memory requirements and compute costs while achieving state-of-the-art pruning performance. Bonsai uses forward-pass-only perturbative pruning to enable efficient compression of large models on a broader range of hardware configurations. Unlike existing structured pruning approaches, Bonsai not only achieves better compression with fewer resources, but also produces models that are twice as fast as those generated by semi-structured pruning. As a concrete demonstration, we use Bonsai to prune an 8B LLaMA-3 model to 50% sparsity on a single A6000 GPU -- a task infeasible with backprop-based methods, which require 2-3x memory. Our results show that removing backprop as a requirement not only enables pruning larger models on constrained hardware but can also lead to state-of-the-art efficiency and performance.
Problem

Research questions and friction points this paper is trying to address.

Develop gradient-free pruning for LLMs to reduce memory and compute costs
Enable efficient model compression on diverse hardware without requiring backpropagation
Achieve high-sparsity pruning of large models with limited GPU resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-free structured pruning method
Forward-pass-only perturbative pruning
Efficient compression on diverse hardware
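The forward-pass-only perturbative idea can be sketched in miniature. The following is a hypothetical toy, not the paper's implementation: a "model" is a sum of module contributions, each module's importance is the loss increase on a small calibration set when that module is ablated in a forward pass, and the lowest-scoring modules are dropped to reach a target sparsity. All names (`model_output`, `perturbative_importance`, `prune`) are illustrative.

```python
# Toy sketch of forward-pass-only perturbative importance scoring
# (hypothetical, not Bonsai's actual algorithm).

def model_output(x, modules, mask):
    # Each "module" contributes weight * x; masked modules contribute nothing.
    return sum(w * x for w, keep in zip(modules, mask) if keep)

def perturbative_importance(modules, calib_xs, calib_ys):
    """Score each module using forward passes only: ablate it, measure loss delta."""
    def loss(mask):
        return sum((model_output(x, modules, mask) - y) ** 2
                   for x, y in zip(calib_xs, calib_ys))
    full = [True] * len(modules)
    base = loss(full)
    scores = []
    for i in range(len(modules)):
        mask = full.copy()
        mask[i] = False                    # ablate module i (no gradients needed)
        scores.append(loss(mask) - base)   # importance = loss increase
    return scores

def prune(modules, scores, sparsity):
    """Drop the lowest-importance modules to hit the target sparsity."""
    k = int(len(modules) * sparsity)
    drop = set(sorted(range(len(modules)), key=lambda i: scores[i])[:k])
    return [w for i, w in enumerate(modules) if i not in drop]

modules = [3.0, 0.01, 2.0, 0.02]            # toy module weights
xs = [1.0, 2.0, 3.0]                        # calibration inputs
ys = [sum(modules) * x for x in xs]         # calibration targets
scores = perturbative_importance(modules, xs, ys)
kept = prune(modules, scores, sparsity=0.5)
print(kept)  # the two high-weight modules survive: [3.0, 2.0]
```

The real method operates on structured units of a transformer (e.g., attention heads or MLP channels) and scores many candidate perturbations over calibration data, but the resource argument is the same: only forward passes are ever run, so no activations or gradients need to be stored for a backward pass.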
πŸ”Ž Similar Papers
No similar papers found.