🤖 AI Summary
This work addresses the limitations of existing model pruning methods, which are designed primarily for instruction-following large language models and struggle to adapt to reasoning-augmented models that explicitly generate long reasoning chains. The study systematically investigates pruning strategies tailored to both model types, proposing a calibration and recovery pipeline aligned with the original training distribution. It evaluates static depth pruning, static width pruning, and dynamic pruning across 17 tasks, revealing for the first time that pruning efficacy depends on the underlying reasoning paradigm. The findings indicate that depth pruning is better suited to classification tasks, while width pruning is more robust on generative and reasoning tasks. Moreover, static pruning better preserves reasoning capabilities, whereas dynamic pruning struggles in long-chain reasoning scenarios.
📝 Abstract
Large language models (LLMs) are increasingly costly to deploy, motivating extensive research on model pruning. However, most existing studies focus on instruction-following LLMs, leaving it unclear whether established pruning strategies transfer to reasoning-augmented models that explicitly generate long intermediate reasoning traces. In this work, we conduct a controlled study of pruning for both instruction-following ($\textbf{LLM-instruct}$) and reasoning-augmented ($\textbf{LLM-think}$) models. To isolate the effects of pruning, we align pruning calibration and post-pruning recovery data with each model's original training distribution, which we show yields more stable and reliable pruning behavior. We evaluate static depth pruning, static width pruning, and dynamic pruning across 17 tasks spanning classification, generation, and reasoning. Our results reveal clear paradigm-dependent differences: depth pruning outperforms width pruning on classification tasks, while width pruning is more robust for generation and reasoning. Moreover, static pruning better preserves reasoning performance, whereas dynamic pruning excels on classification and generation but struggles with long-chain reasoning. These findings underscore the need for pruning strategies that explicitly account for the distinct characteristics of reasoning-augmented LLMs. Our code is publicly available at https://github.com/EIT-NLP/LRM-Pruning.
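To make the distinction between the two static strategies concrete, here is a minimal toy sketch (not from the paper's codebase) contrasting depth pruning, which removes whole layers, with width pruning, which shrinks each layer's dimensions. The layer stack, the magnitude-based neuron scoring, and all names below are illustrative assumptions; real pipelines also prune the matching input dimensions of downstream layers and follow up with recovery training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a stack of 6 linear layers, each an 8x8 weight matrix.
layers = [rng.standard_normal((8, 8)) for _ in range(6)]

def depth_prune(layers, keep_idx):
    """Static depth pruning: drop entire layers, keeping the rest in order."""
    return [layers[i] for i in keep_idx]

def width_prune(w, keep_ratio):
    """Static width pruning (toy): keep the output neurons (rows)
    with the largest L2 norm; a stand-in for real importance scores."""
    n_keep = max(1, int(round(w.shape[0] * keep_ratio)))
    norms = np.linalg.norm(w, axis=1)
    keep = np.sort(np.argsort(norms)[-n_keep:])  # preserve row order
    return w[keep, :]

# Depth pruning: 6 layers -> 3 layers, each still full-width.
shallow = depth_prune(layers, keep_idx=[0, 2, 4])

# Width pruning: all 6 layers kept, each shrunk from 8 to 4 output neurons.
narrow = [width_prune(w, keep_ratio=0.5) for w in layers]
```

Dynamic pruning, by contrast, would choose which units to skip per input at inference time rather than fixing the pruned architecture in advance.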