🤖 AI Summary
Existing structured pruning methods for large language models (LLMs) rely heavily on backpropagation, incurring substantial memory and computational overhead. To address this, we propose Bonsai, the first fully backpropagation-free, gradient-agnostic forward-pass pruning method for LLMs. Bonsai estimates module importance via forward perturbation analysis and performs module-level structured pruning without any gradient computation. On a single NVIDIA A6000 GPU, Bonsai efficiently prunes the 8B-parameter LLaMA-3 model to 50% sparsity: memory consumption drops to one-half to one-third of that of conventional backprop-based methods, pruning runs twice as fast, the pruned model's inference is twice as fast, and accuracy remains state-of-the-art. By eliminating the dependence on gradient computation, Bonsai significantly broadens the feasibility of deploying compressed LLMs on resource-constrained hardware.
📝 Abstract
Structured pruning is a promising approach to creating smaller, faster LLMs. However, existing methods typically rely on backward passes, which inflate memory requirements and compute costs. In this work, we introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation, significantly reducing memory and compute requirements while achieving state-of-the-art pruning performance. Bonsai uses forward-pass-only perturbative pruning to enable efficient compression of large models on a broader range of hardware configurations. Unlike existing structured pruning approaches, Bonsai not only achieves better compression with fewer resources, but also produces models that are twice as fast as those generated by semi-structured pruning. As a concrete demonstration, we use Bonsai to prune an 8B LLaMA-3 model to 50% sparsity on a single A6000 GPU, a task infeasible with backprop-based methods, which require 2-3x more memory. Our results show that removing backprop as a requirement not only enables pruning larger models on constrained hardware but can also lead to state-of-the-art efficiency and performance.
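To make the idea of forward-pass-only perturbative pruning concrete, the toy sketch below estimates the importance of each hidden unit in a tiny two-layer network by ablating it and measuring the change in the network's outputs on calibration inputs, then keeps only the top half of the units. This is an illustration under our own assumptions (the toy model, the squared-error importance score, and all names are ours, not Bonsai's actual implementation), but it shows the key property: importance is estimated from forward passes alone, with no gradients.

```python
# Minimal sketch of forward-pass-only perturbative structured pruning.
# The model, scoring rule, and names here are illustrative assumptions,
# not the Bonsai implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: y = W2 @ relu(W1 @ x).
# The 8 hidden units play the role of prunable "modules".
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def forward(x, keep):
    """Forward pass with a binary mask over hidden units (no gradients used)."""
    h = np.maximum(W1 @ x, 0.0) * keep
    return W2 @ h

# Small set of calibration inputs.
X = rng.normal(size=(16, 4))

# Importance of unit i = mean squared change in the output
# when unit i alone is ablated (zeroed) in the forward pass.
full = np.stack([forward(x, np.ones(8)) for x in X])
importance = np.zeros(8)
for i in range(8):
    ablate = np.ones(8)
    ablate[i] = 0.0
    perturbed = np.stack([forward(x, ablate) for x in X])
    importance[i] = np.mean((full - perturbed) ** 2)

# Structured pruning at 50% sparsity: keep the 4 most important units.
keep_idx = np.argsort(importance)[-4:]
mask = np.zeros(8)
mask[keep_idx] = 1.0
print("kept-unit mask:", mask)
```

For an LLM, the "units" would instead be whole modules such as attention heads or MLP channels, and the score would be measured on a language-modeling calibration set; the backprop-free structure of the loop is the same.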