🤖 AI Summary
This work addresses the suboptimal performance of existing large language model pruning methods—such as SparseGPT—that employ a fixed left-to-right pruning order, particularly when weight matrices exhibit column-wise structure. To overcome this limitation, the authors present the first systematic analysis of how pruning order affects model performance and propose ROSE. ROSE builds on Hessian-based second-order pruning with column- and block-level loss estimation, and designs a two-tier dynamic reordering strategy: columns within each block are reordered by descending loss, while blocks themselves are sorted by their aggregate loss. Additionally, the method defines a "block loss relative range" metric to adaptively detect column-structured layers and trigger global reordering. Experiments on mainstream models including LLaMA-2, LLaMA-3, and Mistral demonstrate that ROSE significantly outperforms SparseGPT and other state-of-the-art pruning approaches.
📝 Abstract
Pruning is widely recognized as an effective method for reducing the parameters of large language models (LLMs), potentially leading to more efficient deployment and inference. One classic and prominent line of LLM one-shot pruning leverages second-order gradients (i.e., the Hessian), represented by the pioneering work SparseGPT. However, the predefined left-to-right pruning order in SparseGPT leads to suboptimal performance when the weights exhibit columnar patterns. This paper studies the effect of pruning order under the SparseGPT framework. The analyses lead us to propose ROSE, a reordered SparseGPT method that prioritizes weights with larger potential pruning errors to be pruned earlier. ROSE first performs pre-pruning to identify candidate weights for removal, and estimates both column and block pruning loss. Subsequently, two-level reordering is performed: columns within each block are reordered in descending order of column loss, while blocks are reordered based on block loss. We introduce the relative range of block loss as a metric to identify columnar layers, enabling adaptive reordering across the entire model. Extensive empirical results on prevalent LLMs (LLaMA2-7B/13B/70B, LLaMA3-8B, Mistral-7B) demonstrate that ROSE surpasses the original SparseGPT and other competing pruning methods. Our code is available at https://github.com/mingluo-su/ROSE.
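The two-level reordering described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function name, the interface taking precomputed per-column losses, and the choice of descending block order are assumptions inferred from the summary ("prioritizes weights with larger potential pruning errors to be pruned earlier"); the actual loss estimates in ROSE come from the Hessian-based pre-pruning step.

```python
import numpy as np

def two_level_reorder(col_losses: np.ndarray, block_size: int) -> np.ndarray:
    """Sketch of ROSE-style two-level reordering (hypothetical interface).

    col_losses: estimated pruning loss per weight column (from a
                Hessian-based pre-pruning pass, not computed here).
    Returns a column permutation in which, within each block, columns
    are sorted by descending loss, and blocks are then ordered by
    descending aggregate loss so higher-loss columns are pruned first.
    """
    n = len(col_losses)
    blocks = []
    for start in range(0, n, block_size):
        idx = np.arange(start, min(start + block_size, n))
        # level 1: reorder columns within the block by descending loss
        idx = idx[np.argsort(-col_losses[idx])]
        blocks.append((col_losses[idx].sum(), idx))
    # level 2: reorder blocks by descending aggregate block loss
    blocks.sort(key=lambda b: -b[0])
    return np.concatenate([idx for _, idx in blocks])

def block_loss_relative_range(block_losses: np.ndarray) -> float:
    """Hypothetical form of the 'relative range of block loss' metric:
    (max - min) / mean, used to flag columnar layers whose block losses
    vary widely and thus benefit from global reordering."""
    return (block_losses.max() - block_losses.min()) / block_losses.mean()
```

For example, with `col_losses = [1, 3, 2, 10, 0, 5]` and `block_size = 3`, the second block (aggregate loss 15) is moved ahead of the first (aggregate loss 6), and each block's columns are sorted by loss internally, giving the permutation `[3, 5, 4, 1, 2, 0]`.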