🤖 AI Summary
This study systematically investigates the impact of layer pruning on the generative capabilities of large language models, revealing significant performance degradation in complex reasoning tasks such as multi-step reasoning, mathematical problem solving, and code generation. The authors propose a supervised fine-tuning strategy that leverages self-generated responses to recover performance without access to original pretraining data. Their analysis demonstrates a strong dependence of generative tasks on model depth and quantifies varying sensitivities across task types. Under constrained post-training conditions, the proposed method improves generative performance by 20–30 percentage points over existing approaches while maintaining 90% of baseline accuracy on classification tasks, thereby highlighting both the effectiveness and inherent limitations of the technique.
📝 Abstract
Recent works have shown that layer pruning can compress large language models (LLMs) while retaining strong performance on classification benchmarks with little or no finetuning. However, existing pruning techniques often suffer severe degradation on generative reasoning tasks. Through a systematic study across multiple model families, we find that tasks requiring multi-step reasoning are particularly sensitive to depth reduction. Beyond surface-level text degeneration, we observe degradation of critical algorithmic capabilities, including arithmetic computation for mathematical reasoning and balanced parenthesis generation for code synthesis. Under realistic post-training constraints, without access to pretraining-scale data or compute, we evaluate a simple mitigation strategy based on supervised finetuning with Self-Generated Responses. This approach achieves strong recovery on classification tasks, retaining up to 90\% of baseline performance, and yields substantial gains of up to 20--30 percentage points on generative benchmarks compared to prior post-pruning techniques. Crucially, despite these gains, recovery for generative reasoning remains fundamentally limited relative to classification tasks and is viable primarily at lower pruning ratios. Overall, we characterize the practical limits of layer pruning for generative reasoning and provide guidance on when depth reduction can be applied effectively under constrained post-training regimes.