🤖 AI Summary
Existing approaches for layer-wise non-uniform sparsity allocation in large language model (LLM) pruning face a trade-off: heuristic methods yield suboptimal performance, while reinforcement learning (RL)-based search incurs prohibitive computational overhead.
Method: This paper proposes an efficient one-shot RL framework whose core innovation is to explicitly decouple policy learning from resource-constraint satisfaction, augmented by a curriculum learning mechanism that substantially reduces search complexity.
Contribution/Results: The method achieves global sparsity allocation via a single policy update, eliminating iterative RL optimization (a minimal sketch of such an update follows below). It consistently outperforms strong heuristic baselines on LLaMA, Mistral, and OPT models, delivering higher post-pruning accuracy. Compared to conventional multi-step RL methods, it reduces training cost by an order of magnitude, achieving both high efficiency and state-of-the-art performance in structured LLM pruning.
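To make the decoupling concrete, here is a minimal, hypothetical PyTorch sketch of one decoupled policy update. All names (`PolicyNet`, `project_to_budget`, `evaluate_reward`) and the REINFORCE-style objective are illustrative assumptions, not the paper's actual implementation: the policy proposes unconstrained per-layer scores, a projection step enforces the global sparsity budget so the policy never has to learn the constraint, and a single gradient step updates the policy.

```python
import torch

NUM_LAYERS = 32          # e.g. number of transformer blocks to allocate over
TARGET_SPARSITY = 0.5    # global budget: prune 50% of weights overall

class PolicyNet(torch.nn.Module):
    """Gaussian policy over unconstrained per-layer scores (illustrative)."""
    def __init__(self, num_layers):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(num_layers))
        self.log_std = torch.nn.Parameter(torch.full((num_layers,), -2.0))

    def sample(self):
        dist = torch.distributions.Normal(self.mu, self.log_std.exp())
        raw = dist.sample()                   # per-layer scores
        return raw, dist.log_prob(raw).sum()

def project_to_budget(raw, target):
    """Decoupling step: map raw scores onto per-layer sparsities whose mean
    matches the global budget, so constraint satisfaction never burdens
    policy learning."""
    s = torch.sigmoid(raw)                    # squash into (0, 1)
    return (s * target / s.mean()).clamp(0.0, 0.95)  # rescale; clamp keeps rates valid

def evaluate_reward(sparsities):
    """Dummy proxy reward; a real run would prune the LLM at these per-layer
    rates and score it (e.g. negative perplexity on a calibration set)."""
    return -sparsities.var()

policy = PolicyNet(NUM_LAYERS)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# One-shot update: sample an allocation, project it onto the budget,
# score it, and take a single REINFORCE-style policy-gradient step.
raw, log_prob = policy.sample()
reward = evaluate_reward(project_to_budget(raw, TARGET_SPARSITY))
loss = -reward.detach() * log_prob
opt.zero_grad()
loss.backward()
opt.step()
print("learned per-layer means:", torch.sigmoid(policy.mu).detach())
```

In this reading, the projection rather than the policy guarantees the budget, which is what lets the search operate in an unconstrained space and converge in a single update.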
📝 Abstract
Pruning is an effective method for compressing Large Language Models (LLMs), but finding an optimal, non-uniform layer-wise sparsity allocation remains a key challenge. Heuristic methods are fast but yield suboptimal performance, while more powerful search-based approaches such as Reinforcement Learning (RL) are often hindered by prohibitive computational costs on large-scale models. To overcome this efficiency barrier, we propose FastForward Pruning. Its core is a decoupled, single-step RL framework that separates policy optimization from the complex budget-satisfaction problem; this decoupling is crucial for efficiently searching the vast policy space of LLMs. The framework further employs a curriculum-based strategy that begins with low-cost, simple tasks and gradually increases task complexity, significantly reducing the search's computational overhead. Evaluated on the LLaMA, Mistral, and OPT model families, our framework discovers pruning policies that outperform strong heuristic baselines. Crucially, compared to other search-based algorithms, our method achieves competitive or superior results at a fraction of the computational cost, demonstrating a clear advantage in search efficiency.
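One way to picture the curriculum is the hypothetical schedule below, which starts with a mild sparsity target and a small calibration set and ratchets both upward. The stage values and the `search_step` function are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical curriculum schedule; stage values and names are illustrative.
def search_step(target_sparsity: float, num_samples: int) -> None:
    """Stand-in for one cheap policy-search step at this difficulty level."""
    print(f"searching: sparsity={target_sparsity:.2f}, samples={num_samples}")

# Easy -> hard: mild pruning and small calibration sets first, so most of the
# search happens cheaply before the expensive final-budget stage.
CURRICULUM = [(0.20, 32), (0.35, 64), (0.50, 128)]

for target, n_samples in CURRICULUM:
    search_step(target, n_samples)
```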